Feb 16 16:59:54 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 16 16:59:54 crc restorecon[4681]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 16:59:54 crc restorecon[4681]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc 
restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc 
restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 
16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 
crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc 
restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 16:59:54 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 
crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc 
restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc 
restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 16:59:55 crc restorecon[4681]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 16:59:55 crc restorecon[4681]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 16 16:59:55 crc kubenswrapper[4870]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 16:59:55 crc kubenswrapper[4870]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 16:59:55 crc kubenswrapper[4870]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 16:59:55 crc kubenswrapper[4870]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 16:59:55 crc kubenswrapper[4870]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 16:59:55 crc kubenswrapper[4870]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.975835 4870 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984077 4870 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984122 4870 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984132 4870 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984142 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984151 4870 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984159 4870 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984167 4870 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984175 4870 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984183 4870 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984191 
4870 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984199 4870 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984207 4870 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984216 4870 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984225 4870 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984235 4870 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984245 4870 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984259 4870 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984271 4870 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984282 4870 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984292 4870 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984301 4870 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984309 4870 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984317 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984326 4870 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984335 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984345 4870 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984354 4870 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984367 4870 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984379 4870 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984389 4870 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984399 4870 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984409 4870 
feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984418 4870 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984428 4870 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984440 4870 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984451 4870 feature_gate.go:330] unrecognized feature gate: Example Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984459 4870 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984467 4870 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984474 4870 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984482 4870 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984489 4870 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984500 4870 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984513 4870 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984522 4870 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984531 4870 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984539 4870 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984549 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984557 4870 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984565 4870 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984573 4870 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984581 4870 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984589 4870 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984596 4870 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984606 4870 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984616 4870 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984625 4870 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984636 4870 
feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984645 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984654 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984664 4870 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984673 4870 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984680 4870 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984688 4870 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984696 4870 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984703 4870 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984718 4870 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984732 4870 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984748 4870 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984759 4870 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984772 4870 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.984783 4870 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985032 4870 flags.go:64] FLAG: --address="0.0.0.0" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985062 4870 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985084 4870 flags.go:64] FLAG: --anonymous-auth="true" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985099 4870 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985116 4870 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985129 4870 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985147 4870 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985163 4870 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985175 4870 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985188 4870 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985203 4870 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 16 16:59:55 crc 
kubenswrapper[4870]: I0216 16:59:55.985216 4870 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985229 4870 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985242 4870 flags.go:64] FLAG: --cgroup-root="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985254 4870 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985266 4870 flags.go:64] FLAG: --client-ca-file="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985277 4870 flags.go:64] FLAG: --cloud-config="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985289 4870 flags.go:64] FLAG: --cloud-provider="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985300 4870 flags.go:64] FLAG: --cluster-dns="[]" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985317 4870 flags.go:64] FLAG: --cluster-domain="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985328 4870 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985341 4870 flags.go:64] FLAG: --config-dir="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985354 4870 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985367 4870 flags.go:64] FLAG: --container-log-max-files="5" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985383 4870 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985396 4870 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985408 4870 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985421 4870 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 16 16:59:55 crc 
kubenswrapper[4870]: I0216 16:59:55.985433 4870 flags.go:64] FLAG: --contention-profiling="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985450 4870 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985463 4870 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985476 4870 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985488 4870 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985503 4870 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985516 4870 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985528 4870 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985539 4870 flags.go:64] FLAG: --enable-load-reader="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985551 4870 flags.go:64] FLAG: --enable-server="true" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985564 4870 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985581 4870 flags.go:64] FLAG: --event-burst="100" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985593 4870 flags.go:64] FLAG: --event-qps="50" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985604 4870 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985615 4870 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985628 4870 flags.go:64] FLAG: --eviction-hard="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985641 4870 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 16 16:59:55 crc 
kubenswrapper[4870]: I0216 16:59:55.985653 4870 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985664 4870 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985676 4870 flags.go:64] FLAG: --eviction-soft="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985688 4870 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985700 4870 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985711 4870 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985722 4870 flags.go:64] FLAG: --experimental-mounter-path="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985734 4870 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985746 4870 flags.go:64] FLAG: --fail-swap-on="true" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985757 4870 flags.go:64] FLAG: --feature-gates="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985771 4870 flags.go:64] FLAG: --file-check-frequency="20s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985783 4870 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985795 4870 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985807 4870 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985819 4870 flags.go:64] FLAG: --healthz-port="10248" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985831 4870 flags.go:64] FLAG: --help="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985845 4870 flags.go:64] FLAG: --hostname-override="" Feb 16 16:59:55 crc 
kubenswrapper[4870]: I0216 16:59:55.985857 4870 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985869 4870 flags.go:64] FLAG: --http-check-frequency="20s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985882 4870 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985895 4870 flags.go:64] FLAG: --image-credential-provider-config="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985906 4870 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985918 4870 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985929 4870 flags.go:64] FLAG: --image-service-endpoint="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.985941 4870 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986004 4870 flags.go:64] FLAG: --kube-api-burst="100" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986014 4870 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986024 4870 flags.go:64] FLAG: --kube-api-qps="50" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986033 4870 flags.go:64] FLAG: --kube-reserved="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986045 4870 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986054 4870 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986065 4870 flags.go:64] FLAG: --kubelet-cgroups="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986074 4870 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986084 4870 flags.go:64] FLAG: --lock-file="" Feb 16 
16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986094 4870 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986103 4870 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986112 4870 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986127 4870 flags.go:64] FLAG: --log-json-split-stream="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986139 4870 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986148 4870 flags.go:64] FLAG: --log-text-split-stream="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986158 4870 flags.go:64] FLAG: --logging-format="text" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986167 4870 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986178 4870 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986187 4870 flags.go:64] FLAG: --manifest-url="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986196 4870 flags.go:64] FLAG: --manifest-url-header="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986210 4870 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986220 4870 flags.go:64] FLAG: --max-open-files="1000000" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986231 4870 flags.go:64] FLAG: --max-pods="110" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986240 4870 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986251 4870 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986260 4870 flags.go:64] FLAG: 
--memory-manager-policy="None" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986269 4870 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986279 4870 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986288 4870 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986298 4870 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986321 4870 flags.go:64] FLAG: --node-status-max-images="50" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986331 4870 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986340 4870 flags.go:64] FLAG: --oom-score-adj="-999" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986350 4870 flags.go:64] FLAG: --pod-cidr="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986358 4870 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986372 4870 flags.go:64] FLAG: --pod-manifest-path="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986381 4870 flags.go:64] FLAG: --pod-max-pids="-1" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986390 4870 flags.go:64] FLAG: --pods-per-core="0" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986399 4870 flags.go:64] FLAG: --port="10250" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986409 4870 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986418 4870 flags.go:64] FLAG: --provider-id="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 
16:59:55.986427 4870 flags.go:64] FLAG: --qos-reserved="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986436 4870 flags.go:64] FLAG: --read-only-port="10255" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986446 4870 flags.go:64] FLAG: --register-node="true" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986458 4870 flags.go:64] FLAG: --register-schedulable="true" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986470 4870 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986490 4870 flags.go:64] FLAG: --registry-burst="10" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986502 4870 flags.go:64] FLAG: --registry-qps="5" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986513 4870 flags.go:64] FLAG: --reserved-cpus="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986527 4870 flags.go:64] FLAG: --reserved-memory="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986542 4870 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986554 4870 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986577 4870 flags.go:64] FLAG: --rotate-certificates="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986589 4870 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986600 4870 flags.go:64] FLAG: --runonce="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986612 4870 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986633 4870 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986645 4870 flags.go:64] FLAG: --seccomp-default="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986657 4870 
flags.go:64] FLAG: --serialize-image-pulls="true" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986669 4870 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986681 4870 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986693 4870 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986704 4870 flags.go:64] FLAG: --storage-driver-password="root" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986714 4870 flags.go:64] FLAG: --storage-driver-secure="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986726 4870 flags.go:64] FLAG: --storage-driver-table="stats" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986738 4870 flags.go:64] FLAG: --storage-driver-user="root" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986750 4870 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986762 4870 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986775 4870 flags.go:64] FLAG: --system-cgroups="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986786 4870 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986808 4870 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986818 4870 flags.go:64] FLAG: --tls-cert-file="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986827 4870 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986844 4870 flags.go:64] FLAG: --tls-min-version="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986855 4870 flags.go:64] FLAG: --tls-private-key-file="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 
16:59:55.986867 4870 flags.go:64] FLAG: --topology-manager-policy="none" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986878 4870 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986892 4870 flags.go:64] FLAG: --topology-manager-scope="container" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986905 4870 flags.go:64] FLAG: --v="2" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986920 4870 flags.go:64] FLAG: --version="false" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986936 4870 flags.go:64] FLAG: --vmodule="" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986983 4870 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.986997 4870 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988146 4870 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988170 4870 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988188 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988200 4870 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988212 4870 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988221 4870 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988230 4870 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988240 4870 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 16:59:55 crc 
kubenswrapper[4870]: W0216 16:59:55.988250 4870 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988267 4870 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988284 4870 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988294 4870 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988306 4870 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988316 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988326 4870 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988337 4870 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988347 4870 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988357 4870 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988366 4870 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988377 4870 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988388 4870 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988397 4870 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 
16:59:55.988408 4870 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988419 4870 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988429 4870 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988440 4870 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988449 4870 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988459 4870 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988469 4870 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988478 4870 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988488 4870 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988495 4870 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988503 4870 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988511 4870 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988522 4870 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988533 4870 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988545 4870 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988556 4870 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988567 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988576 4870 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988584 4870 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988593 4870 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988601 4870 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988609 4870 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988617 4870 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988625 4870 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988633 4870 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988641 4870 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988648 4870 feature_gate.go:330] unrecognized feature gate: 
VSphereDriverConfiguration Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988656 4870 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988664 4870 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988672 4870 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988679 4870 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988687 4870 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988694 4870 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988704 4870 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988714 4870 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988722 4870 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988730 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988739 4870 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988747 4870 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988757 4870 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988769 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988779 4870 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988789 4870 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988799 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988809 4870 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988819 4870 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988829 4870 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988839 4870 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 16:59:55 crc kubenswrapper[4870]: W0216 16:59:55.988849 4870 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 16:59:55 crc kubenswrapper[4870]: I0216 16:59:55.988876 4870 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.000556 4870 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.000633 4870 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000747 4870 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000766 4870 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000773 4870 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000781 4870 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000788 4870 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000794 4870 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000802 4870 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000813 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000819 4870 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000825 4870 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000830 4870 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000836 4870 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000841 4870 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000847 4870 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000852 4870 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000857 4870 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000862 4870 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000870 4870 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000878 4870 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000885 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000892 4870 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000899 4870 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000905 4870 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000910 4870 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000916 4870 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000922 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000927 4870 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000933 4870 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000938 4870 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000960 4870 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000966 4870 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000971 4870 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000977 4870 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000983 4870 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000990 4870 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.000996 4870 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001002 4870 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001008 4870 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001013 4870 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001019 4870 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001024 4870 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001030 4870 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001035 4870 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001040 4870 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001047 4870 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001053 4870 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001060 4870 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001067 4870 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001072 4870 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001079 4870 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001085 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001091 4870 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001098 4870 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001104 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001110 4870 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001116 4870 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001123 4870 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001129 4870 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001134 4870 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001140 4870 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001145 4870 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001151 4870 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001156 4870 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001162 4870 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001167 4870 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001172 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001178 4870 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001183 4870 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001188 4870 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001193 4870 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001200 4870 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.001210 4870 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001388 4870 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001397 4870 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001403 4870 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001410 4870 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001416 4870 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001421 4870 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001427 4870 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001433 4870 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001438 4870 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001444 4870 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001450 4870 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001459 4870 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001465 4870 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001471 4870 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001477 4870 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001483 4870 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001488 4870 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001494 4870 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001500 4870 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001507 4870 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001513 4870 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001519 4870 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001524 4870 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001530 4870 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001535 4870 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001541 4870 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001546 4870 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001551 4870 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001556 4870 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001562 4870 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001567 4870 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001573 4870 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001579 4870 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001584 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001591 4870 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001598 4870 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001603 4870 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001608 4870 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001615 4870 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001624 4870 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001631 4870 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001637 4870 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001643 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001684 4870 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001695 4870 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001702 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001711 4870 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001719 4870 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001727 4870 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001735 4870 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001742 4870 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001747 4870 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001754 4870 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001759 4870 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001766 4870 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001772 4870 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001778 4870 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001785 4870 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001791 4870 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001797 4870 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001804 4870 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001811 4870 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001816 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001822 4870 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001829 4870 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001835 4870 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001842 4870 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001850 4870 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001858 4870 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001866 4870 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.001875 4870 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.001885 4870 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.003201 4870 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.011706 4870 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.011986 4870 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.014154 4870 server.go:997] "Starting client certificate rotation"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.014212 4870 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.014407 4870 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-04 20:14:13.00794743 +0000 UTC
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.014522 4870 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.046680 4870 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 16:59:56 crc kubenswrapper[4870]: E0216 16:59:56.050072 4870 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.052645 4870 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.069173 4870 log.go:25] "Validated CRI v1 runtime API"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.104460 4870 log.go:25] "Validated CRI v1 image API"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.106465 4870 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.112833 4870 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-16-16-55-02-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.112897 4870 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.135135 4870 manager.go:217] Machine: {Timestamp:2026-02-16 16:59:56.131693427 +0000 UTC m=+0.615157831 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:dab7b9c4-d71b-440c-b254-67ed578dcf0e BootID:580d9830-c978-4311-ba24-4b5d59c3355c Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:71:5f:93 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:71:5f:93 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:35:94:ee Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2c:4f:81 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:4a:e5:83 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b3:43:98 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:a6:85:9d:3a:df:7e Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:3a:5d:2c:84:0d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.135501 4870 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.135749 4870 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.136449 4870 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.136677 4870 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.136722 4870 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.137022 4870 topology_manager.go:138] "Creating topology manager with none policy"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.137039 4870 container_manager_linux.go:303] "Creating device plugin manager"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.139445 4870 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.139485 4870 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.140578 4870 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.140723 4870 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.145610 4870 kubelet.go:418] "Attempting to sync node with API server"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.145640 4870 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.145663 4870 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.145678 4870 kubelet.go:324] "Adding apiserver pod source"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.145693 4870 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.153106 4870 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.154134 4870 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.154845 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused
Feb 16 16:59:56 crc kubenswrapper[4870]: E0216 16:59:56.154986 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError"
Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.154979 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused
Feb 16 16:59:56 crc kubenswrapper[4870]: E0216 16:59:56.155088 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError"
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.156203 4870 kubelet.go:854] "Not starting ClusterTrustBundle
informer because we are in static kubelet mode" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.157617 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.157642 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.157649 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.157658 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.157671 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.157679 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.157687 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.157700 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.157709 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.157717 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.157741 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.157754 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.158579 4870 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.159104 4870 server.go:1280] "Started 
kubelet" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.160213 4870 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.160265 4870 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.160797 4870 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Feb 16 16:59:56 crc systemd[1]: Started Kubernetes Kubelet. Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.161977 4870 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.162993 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.163024 4870 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.163304 4870 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.163326 4870 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.163269 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 18:48:24.889012256 +0000 UTC Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.163977 4870 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 16 16:59:56 crc kubenswrapper[4870]: E0216 16:59:56.164136 4870 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 16:59:56 crc 
kubenswrapper[4870]: E0216 16:59:56.166142 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="200ms" Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.168063 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Feb 16 16:59:56 crc kubenswrapper[4870]: E0216 16:59:56.168153 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.168622 4870 server.go:460] "Adding debug handlers to kubelet server" Feb 16 16:59:56 crc kubenswrapper[4870]: E0216 16:59:56.174497 4870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.204:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894c8ac617ebac4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 16:59:56.159072964 +0000 UTC m=+0.642537348,LastTimestamp:2026-02-16 16:59:56.159072964 +0000 UTC m=+0.642537348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.176301 4870 factory.go:55] Registering systemd factory Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.176376 4870 factory.go:221] Registration of the systemd container factory successfully Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.176813 4870 factory.go:153] Registering CRI-O factory Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.176841 4870 factory.go:221] Registration of the crio container factory successfully Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.176915 4870 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.176963 4870 factory.go:103] Registering Raw factory Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.176994 4870 manager.go:1196] Started watching for new ooms in manager Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.177586 4870 manager.go:319] Starting recovery of all containers Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182327 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182391 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182408 4870 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182423 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182439 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182454 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182482 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182497 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182518 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182532 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182546 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182560 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182576 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182592 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182605 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182618 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182636 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182648 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182663 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182682 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182696 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182710 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182724 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182737 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182765 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182782 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182824 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182835 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182848 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182859 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182872 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182883 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182895 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182922 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182933 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182966 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182981 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.182997 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183013 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" 
seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183027 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183042 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183056 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183070 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183084 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183099 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 
16:59:56.183118 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183131 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183146 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183163 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183176 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183191 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183206 4870 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183228 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183252 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183267 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183283 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183301 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183315 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183329 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183345 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183362 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183376 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183392 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183409 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183426 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183441 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183457 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183471 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183486 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183530 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183544 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183556 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183570 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183583 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183598 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183612 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183628 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183643 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183657 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183670 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183681 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183694 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183704 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183715 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183728 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183740 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183752 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183763 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183775 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183787 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183800 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183811 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183823 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183835 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" 
seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183846 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183858 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183872 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183884 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183896 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183906 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 
16:59:56.183917 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183928 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183940 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183971 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.183992 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184006 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184021 4870 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184047 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184062 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184127 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184148 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184163 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184179 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184192 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184204 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184220 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184234 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184247 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184261 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" 
seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184274 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184289 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184303 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184315 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184328 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184344 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184359 4870 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184372 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184386 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184403 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.184426 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189200 4870 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 16 
16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189279 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189351 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189392 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189410 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189426 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189447 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189463 4870 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189478 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189510 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189528 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189549 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189574 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189595 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189614 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189630 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189650 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189666 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189685 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189705 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189725 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189744 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189761 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189781 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189797 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189816 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189837 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189852 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189867 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189882 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189899 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189913 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189932 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189966 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.189985 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190023 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190038 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190058 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" 
seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190074 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190273 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190302 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190323 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190340 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190354 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 
16:59:56.190370 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190384 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190400 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190415 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190430 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190442 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190455 4870 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190474 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190485 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190500 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190511 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190522 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190533 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190543 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190554 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190566 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190576 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190588 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190600 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" 
Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190610 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190659 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190676 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190693 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190709 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190721 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 
16:59:56.190734 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190746 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190757 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190773 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190791 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190805 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190817 4870 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190828 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190839 4870 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190851 4870 reconstruct.go:97] "Volume reconstruction finished" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.190859 4870 reconciler.go:26] "Reconciler: start to sync state" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.198905 4870 manager.go:324] Recovery completed Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.210476 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.212569 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.212617 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.212626 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.213629 4870 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 16:59:56 crc 
kubenswrapper[4870]: I0216 16:59:56.213651 4870 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.213672 4870 state_mem.go:36] "Initialized new in-memory state store" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.220241 4870 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.221578 4870 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.221613 4870 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.221640 4870 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 16:59:56 crc kubenswrapper[4870]: E0216 16:59:56.221797 4870 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.222453 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Feb 16 16:59:56 crc kubenswrapper[4870]: E0216 16:59:56.222539 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.237653 4870 policy_none.go:49] "None policy: Start" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.238827 4870 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 16:59:56 crc 
kubenswrapper[4870]: I0216 16:59:56.238882 4870 state_mem.go:35] "Initializing new in-memory state store" Feb 16 16:59:56 crc kubenswrapper[4870]: E0216 16:59:56.264741 4870 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.303002 4870 manager.go:334] "Starting Device Plugin manager" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.303297 4870 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.303316 4870 server.go:79] "Starting device plugin registration server" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.303732 4870 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.303746 4870 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.304087 4870 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.304173 4870 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.304183 4870 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 16:59:56 crc kubenswrapper[4870]: E0216 16:59:56.312385 4870 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.322898 4870 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.323029 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.324332 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.324374 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.324385 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.324527 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.324746 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.324822 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.325508 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.325533 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.325547 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.325697 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.325902 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.325927 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.325971 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.325981 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.325993 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.326689 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.326716 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.326727 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.326877 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.326939 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.327016 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.327271 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.327371 
4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.327401 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.328208 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.328228 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.328236 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.328402 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.328435 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.328449 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.328658 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.328698 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.328721 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.329655 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.329678 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.329687 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.329974 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.330008 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.330023 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.330167 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.330203 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.331022 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.331045 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.331053 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:56 crc kubenswrapper[4870]: E0216 16:59:56.368139 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="400ms" Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.386895 4870 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/cpuset.cpus.effective": open /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/cpuset.cpus.effective: no such device Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.393543 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.393592 4870 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.393621 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.393672 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.393702 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.393754 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.393779 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.393852 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.393909 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.393933 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.393992 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.394025 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.394049 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.394079 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.394100 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.404977 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.406625 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.406658 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.406670 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.406714 4870 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 16:59:56 crc kubenswrapper[4870]: E0216 16:59:56.407262 4870 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495364 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495448 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495490 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495522 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495547 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495624 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495679 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495703 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495744 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495772 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" 
(UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495796 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495822 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495887 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495913 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495903 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 
16:59:56.496006 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.496037 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.496032 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.496091 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.496093 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.496015 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" 
(UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.496020 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.496112 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495934 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495905 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.496153 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.496169 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.495900 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.496053 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.496253 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.607439 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.608858 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.608905 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.608916 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:56 crc 
kubenswrapper[4870]: I0216 16:59:56.608960 4870 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 16:59:56 crc kubenswrapper[4870]: E0216 16:59:56.609371 4870 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.664030 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.670495 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.683744 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.710666 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.712890 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2da992b03c4c8d63dac34a434ecca11a5fb38139f5e12f40127096c444b628ce WatchSource:0}: Error finding container 2da992b03c4c8d63dac34a434ecca11a5fb38139f5e12f40127096c444b628ce: Status 404 returned error can't find the container with id 2da992b03c4c8d63dac34a434ecca11a5fb38139f5e12f40127096c444b628ce Feb 16 16:59:56 crc kubenswrapper[4870]: I0216 16:59:56.720336 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.723733 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-07fcf210b3a84395d90ea67d0de958fdcc87d5b6185881c746e5843c191e7423 WatchSource:0}: Error finding container 07fcf210b3a84395d90ea67d0de958fdcc87d5b6185881c746e5843c191e7423: Status 404 returned error can't find the container with id 07fcf210b3a84395d90ea67d0de958fdcc87d5b6185881c746e5843c191e7423 Feb 16 16:59:56 crc kubenswrapper[4870]: W0216 16:59:56.738353 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-6983168d268e318b9b6f438db46e401cde88e4909f462962ce1d28d4afab67ad WatchSource:0}: Error finding container 6983168d268e318b9b6f438db46e401cde88e4909f462962ce1d28d4afab67ad: Status 404 returned error can't find the container with id 6983168d268e318b9b6f438db46e401cde88e4909f462962ce1d28d4afab67ad Feb 16 16:59:56 crc kubenswrapper[4870]: E0216 16:59:56.769749 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="800ms" Feb 16 16:59:57 crc kubenswrapper[4870]: I0216 16:59:57.009911 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:57 crc kubenswrapper[4870]: I0216 16:59:57.011887 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:57 crc kubenswrapper[4870]: I0216 16:59:57.011997 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 
16 16:59:57 crc kubenswrapper[4870]: I0216 16:59:57.012016 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:57 crc kubenswrapper[4870]: I0216 16:59:57.012054 4870 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 16:59:57 crc kubenswrapper[4870]: E0216 16:59:57.012873 4870 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Feb 16 16:59:57 crc kubenswrapper[4870]: W0216 16:59:57.088171 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Feb 16 16:59:57 crc kubenswrapper[4870]: E0216 16:59:57.088256 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:57 crc kubenswrapper[4870]: I0216 16:59:57.162492 4870 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Feb 16 16:59:57 crc kubenswrapper[4870]: I0216 16:59:57.163468 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 07:59:42.377001582 +0000 UTC Feb 16 16:59:57 crc kubenswrapper[4870]: I0216 16:59:57.226468 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"962450b9572970bd144642a2d449ac7dcfe1be56199183808bd00553444cf541"} Feb 16 16:59:57 crc kubenswrapper[4870]: I0216 16:59:57.227475 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b629b5a71c71d2dfa4c5b31e0ea4f2c063e2fcbb9f5cf7ad59f10c8ec78828e1"} Feb 16 16:59:57 crc kubenswrapper[4870]: I0216 16:59:57.228350 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2da992b03c4c8d63dac34a434ecca11a5fb38139f5e12f40127096c444b628ce"} Feb 16 16:59:57 crc kubenswrapper[4870]: I0216 16:59:57.229148 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6983168d268e318b9b6f438db46e401cde88e4909f462962ce1d28d4afab67ad"} Feb 16 16:59:57 crc kubenswrapper[4870]: I0216 16:59:57.230653 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"07fcf210b3a84395d90ea67d0de958fdcc87d5b6185881c746e5843c191e7423"} Feb 16 16:59:57 crc kubenswrapper[4870]: W0216 16:59:57.417781 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Feb 16 16:59:57 crc kubenswrapper[4870]: E0216 16:59:57.417868 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed 
to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:57 crc kubenswrapper[4870]: W0216 16:59:57.507780 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Feb 16 16:59:57 crc kubenswrapper[4870]: E0216 16:59:57.508316 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:57 crc kubenswrapper[4870]: E0216 16:59:57.571080 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="1.6s" Feb 16 16:59:57 crc kubenswrapper[4870]: W0216 16:59:57.732906 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Feb 16 16:59:57 crc kubenswrapper[4870]: E0216 16:59:57.733053 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Feb 16 
16:59:57 crc kubenswrapper[4870]: I0216 16:59:57.813864 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:57 crc kubenswrapper[4870]: I0216 16:59:57.822244 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:57 crc kubenswrapper[4870]: I0216 16:59:57.822307 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:57 crc kubenswrapper[4870]: I0216 16:59:57.822328 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:57 crc kubenswrapper[4870]: I0216 16:59:57.822364 4870 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 16:59:57 crc kubenswrapper[4870]: E0216 16:59:57.823258 4870 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.162309 4870 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.164260 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 15:54:37.793872147 +0000 UTC Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.226463 4870 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 16:59:58 crc kubenswrapper[4870]: E0216 16:59:58.227634 4870 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate 
from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.236034 4870 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="89843858f314702b13c9436980c8c50c088761d78060e9e7b79182ef78cede81" exitCode=0 Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.236143 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"89843858f314702b13c9436980c8c50c088761d78060e9e7b79182ef78cede81"} Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.236192 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.237599 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.237635 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.237651 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.238842 4870 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481" exitCode=0 Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.238973 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481"} Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.238984 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.240128 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.240172 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.240181 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.243753 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d"} Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.243837 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576"} Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.243866 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291"} Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.246326 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 
16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.246424 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c" exitCode=0 Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.246543 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c"} Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.247672 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.247709 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.247721 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.249607 4870 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04" exitCode=0 Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.249648 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04"} Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.249741 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.249741 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 
16:59:58.250983 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.251013 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.251029 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.251015 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.251116 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:58 crc kubenswrapper[4870]: I0216 16:59:58.251130 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.161522 4870 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.164551 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 18:25:55.850849245 +0000 UTC Feb 16 16:59:59 crc kubenswrapper[4870]: E0216 16:59:59.173311 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="3.2s" Feb 16 16:59:59 crc kubenswrapper[4870]: W0216 16:59:59.231393 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Feb 16 16:59:59 crc kubenswrapper[4870]: E0216 16:59:59.231536 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.256217 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d"} Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.256268 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2"} Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.256283 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87"} Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.256302 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee"} Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.259149 4870 generic.go:334] "Generic (PLEG): container finished" 
podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea" exitCode=0 Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.259196 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea"} Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.259344 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.260699 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.260731 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.260740 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.265822 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.265833 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"25f3cdf90be2ee9e1b5b1eff2e81b1f42985106dfcba9a9aab0cc27bfa908a2d"} Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.269410 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.269521 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 
16:59:59.269588 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.271135 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7349c946be2d324b8b6393304de50fff9f89a1143c1d4249e2c8bfacb1df1251"} Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.271264 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"22759612b983e0c11f0aef2e271a252d00fa8bcfafabf5176d17d29ba811f485"} Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.271374 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a830b7770df5d6eb09cd233bf335c59c23fcc311afc7b2fb2df511ee4f7a1f79"} Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.271226 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.272371 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.272414 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.272425 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.274446 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717"} Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.274647 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.275581 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.275615 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.275631 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.394377 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.423740 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.424971 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.425027 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.425040 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 16:59:59 crc kubenswrapper[4870]: I0216 16:59:59.425071 4870 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 16:59:59 crc kubenswrapper[4870]: E0216 16:59:59.425750 4870 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.204:6443: connect: connection refused" node="crc" Feb 16 16:59:59 crc kubenswrapper[4870]: W0216 16:59:59.436125 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.204:6443: connect: connection refused Feb 16 16:59:59 crc kubenswrapper[4870]: E0216 16:59:59.436220 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.204:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:59:59 crc kubenswrapper[4870]: E0216 16:59:59.513483 4870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.204:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894c8ac617ebac4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 16:59:56.159072964 +0000 UTC m=+0.642537348,LastTimestamp:2026-02-16 16:59:56.159072964 +0000 UTC m=+0.642537348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.149886 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.165106 4870 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:06:21.685129209 +0000 UTC Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.279517 4870 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059" exitCode=0 Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.279593 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059"} Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.279718 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.280774 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.280799 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.280810 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.282843 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2c3c91bf2ee068b04fa13fd529cc494c11f726f68295eee7edb2ecc3f078453a"} Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.282859 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.282922 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.282878 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.282993 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.285091 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.285148 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.285169 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.286309 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.286346 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.286390 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.290318 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.290366 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.290378 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.290391 4870 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.290423 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.290436 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:00 crc kubenswrapper[4870]: I0216 17:00:00.945025 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.166012 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:52:18.553072723 +0000 UTC Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.290818 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083"} Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.290873 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388"} Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.290884 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc"} Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.290893 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9"} Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.290907 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.291027 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.291047 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.291067 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.292061 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.292087 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.292096 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.292154 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.292186 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.292195 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.292501 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.292530 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.292542 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:01 crc kubenswrapper[4870]: I0216 17:00:01.481134 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.166629 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 04:31:01.646912866 +0000 UTC Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.298332 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425"} Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.298439 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.298487 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.298575 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.299506 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.299541 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 
17:00:02.299544 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.299578 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.299588 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.299554 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.300237 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.300279 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.300304 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.369377 4870 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.626272 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.628163 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.628210 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.628230 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 16 17:00:02 crc kubenswrapper[4870]: I0216 17:00:02.628263 4870 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 17:00:03 crc kubenswrapper[4870]: I0216 17:00:03.062400 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 17:00:03 crc kubenswrapper[4870]: I0216 17:00:03.071772 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 17:00:03 crc kubenswrapper[4870]: I0216 17:00:03.167591 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:20:14.443635468 +0000 UTC Feb 16 17:00:03 crc kubenswrapper[4870]: I0216 17:00:03.301415 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:03 crc kubenswrapper[4870]: I0216 17:00:03.301715 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:03 crc kubenswrapper[4870]: I0216 17:00:03.301863 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:03 crc kubenswrapper[4870]: I0216 17:00:03.305360 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:03 crc kubenswrapper[4870]: I0216 17:00:03.305411 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:03 crc kubenswrapper[4870]: I0216 17:00:03.305424 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:03 crc kubenswrapper[4870]: I0216 17:00:03.305366 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 17:00:03 crc kubenswrapper[4870]: I0216 17:00:03.305504 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:03 crc kubenswrapper[4870]: I0216 17:00:03.305520 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:03 crc kubenswrapper[4870]: I0216 17:00:03.305451 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:03 crc kubenswrapper[4870]: I0216 17:00:03.305577 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:03 crc kubenswrapper[4870]: I0216 17:00:03.305593 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:03 crc kubenswrapper[4870]: I0216 17:00:03.945614 4870 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 17:00:03 crc kubenswrapper[4870]: I0216 17:00:03.945728 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 17:00:04 crc kubenswrapper[4870]: I0216 17:00:04.168280 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 17:37:23.356877877 
+0000 UTC Feb 16 17:00:04 crc kubenswrapper[4870]: I0216 17:00:04.303692 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:00:04 crc kubenswrapper[4870]: I0216 17:00:04.303756 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:04 crc kubenswrapper[4870]: I0216 17:00:04.304897 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:04 crc kubenswrapper[4870]: I0216 17:00:04.304933 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:04 crc kubenswrapper[4870]: I0216 17:00:04.304966 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:04 crc kubenswrapper[4870]: I0216 17:00:04.722340 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:00:04 crc kubenswrapper[4870]: I0216 17:00:04.722581 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:04 crc kubenswrapper[4870]: I0216 17:00:04.724189 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:04 crc kubenswrapper[4870]: I0216 17:00:04.724222 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:04 crc kubenswrapper[4870]: I0216 17:00:04.724235 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:05 crc kubenswrapper[4870]: I0216 17:00:05.169174 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:05:45.128525376 +0000 UTC Feb 16 17:00:06 crc kubenswrapper[4870]: I0216 
17:00:06.058816 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 17:00:06 crc kubenswrapper[4870]: I0216 17:00:06.059095 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:06 crc kubenswrapper[4870]: I0216 17:00:06.060686 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:06 crc kubenswrapper[4870]: I0216 17:00:06.060758 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:06 crc kubenswrapper[4870]: I0216 17:00:06.060783 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:06 crc kubenswrapper[4870]: I0216 17:00:06.169844 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:55:45.714566071 +0000 UTC Feb 16 17:00:06 crc kubenswrapper[4870]: I0216 17:00:06.272815 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 16 17:00:06 crc kubenswrapper[4870]: I0216 17:00:06.273178 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:06 crc kubenswrapper[4870]: I0216 17:00:06.274744 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:06 crc kubenswrapper[4870]: I0216 17:00:06.274795 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:06 crc kubenswrapper[4870]: I0216 17:00:06.274812 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:06 crc kubenswrapper[4870]: E0216 17:00:06.312541 4870 
eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 17:00:07 crc kubenswrapper[4870]: I0216 17:00:07.166405 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 16 17:00:07 crc kubenswrapper[4870]: I0216 17:00:07.166635 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:07 crc kubenswrapper[4870]: I0216 17:00:07.167860 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:07 crc kubenswrapper[4870]: I0216 17:00:07.167895 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:07 crc kubenswrapper[4870]: I0216 17:00:07.167905 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:07 crc kubenswrapper[4870]: I0216 17:00:07.171456 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 12:53:06.514714884 +0000 UTC Feb 16 17:00:08 crc kubenswrapper[4870]: I0216 17:00:08.171984 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 03:52:26.219159766 +0000 UTC Feb 16 17:00:09 crc kubenswrapper[4870]: I0216 17:00:09.173147 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:45:38.210204219 +0000 UTC Feb 16 17:00:10 crc kubenswrapper[4870]: I0216 17:00:10.162673 4870 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 16 
17:00:10 crc kubenswrapper[4870]: I0216 17:00:10.174181 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 06:36:58.61851432 +0000 UTC Feb 16 17:00:10 crc kubenswrapper[4870]: I0216 17:00:10.212007 4870 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:32874->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 16 17:00:10 crc kubenswrapper[4870]: I0216 17:00:10.212102 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:32874->192.168.126.11:17697: read: connection reset by peer" Feb 16 17:00:10 crc kubenswrapper[4870]: W0216 17:00:10.276095 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 17:00:10 crc kubenswrapper[4870]: I0216 17:00:10.276233 4870 trace.go:236] Trace[2135065575]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 17:00:00.274) (total time: 10001ms): Feb 16 17:00:10 crc kubenswrapper[4870]: Trace[2135065575]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:00:10.276) Feb 16 17:00:10 crc kubenswrapper[4870]: Trace[2135065575]: [10.001203539s] [10.001203539s] END Feb 16 17:00:10 crc kubenswrapper[4870]: E0216 17:00:10.276270 
4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 17:00:10 crc kubenswrapper[4870]: I0216 17:00:10.319403 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 17:00:10 crc kubenswrapper[4870]: I0216 17:00:10.320848 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2c3c91bf2ee068b04fa13fd529cc494c11f726f68295eee7edb2ecc3f078453a" exitCode=255 Feb 16 17:00:10 crc kubenswrapper[4870]: I0216 17:00:10.320900 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2c3c91bf2ee068b04fa13fd529cc494c11f726f68295eee7edb2ecc3f078453a"} Feb 16 17:00:10 crc kubenswrapper[4870]: I0216 17:00:10.321078 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:10 crc kubenswrapper[4870]: I0216 17:00:10.321839 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:10 crc kubenswrapper[4870]: I0216 17:00:10.321871 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:10 crc kubenswrapper[4870]: I0216 17:00:10.321885 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:10 crc kubenswrapper[4870]: I0216 17:00:10.322483 4870 scope.go:117] "RemoveContainer" containerID="2c3c91bf2ee068b04fa13fd529cc494c11f726f68295eee7edb2ecc3f078453a" 
Feb 16 17:00:10 crc kubenswrapper[4870]: W0216 17:00:10.374399 4870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 17:00:10 crc kubenswrapper[4870]: I0216 17:00:10.374934 4870 trace.go:236] Trace[1749688798]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 17:00:00.372) (total time: 10002ms): Feb 16 17:00:10 crc kubenswrapper[4870]: Trace[1749688798]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:00:10.374) Feb 16 17:00:10 crc kubenswrapper[4870]: Trace[1749688798]: [10.002151525s] [10.002151525s] END Feb 16 17:00:10 crc kubenswrapper[4870]: E0216 17:00:10.374995 4870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 17:00:11 crc kubenswrapper[4870]: I0216 17:00:11.174616 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:12:35.369238879 +0000 UTC Feb 16 17:00:11 crc kubenswrapper[4870]: I0216 17:00:11.253476 4870 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 17:00:11 crc 
kubenswrapper[4870]: I0216 17:00:11.253829 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 17:00:11 crc kubenswrapper[4870]: I0216 17:00:11.260604 4870 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 17:00:11 crc kubenswrapper[4870]: I0216 17:00:11.261003 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 17:00:12 crc kubenswrapper[4870]: I0216 17:00:12.048404 4870 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]log ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]etcd ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 16 17:00:12 crc kubenswrapper[4870]: 
[+]poststarthook/start-apiserver-admission-initializer ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/priority-and-fairness-filter ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/start-apiextensions-informers ok Feb 16 17:00:12 crc kubenswrapper[4870]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Feb 16 17:00:12 crc kubenswrapper[4870]: [-]poststarthook/crd-informer-synced failed: reason withheld Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/start-system-namespaces-controller ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 16 17:00:12 crc kubenswrapper[4870]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 16 17:00:12 crc kubenswrapper[4870]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/bootstrap-controller ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/start-kube-aggregator-informers ok Feb 16 17:00:12 crc 
kubenswrapper[4870]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/apiservice-registration-controller ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/apiservice-discovery-controller ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]autoregister-completion ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/apiservice-openapi-controller ok Feb 16 17:00:12 crc kubenswrapper[4870]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 16 17:00:12 crc kubenswrapper[4870]: livez check failed Feb 16 17:00:12 crc kubenswrapper[4870]: I0216 17:00:12.048478 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:00:12 crc kubenswrapper[4870]: I0216 17:00:12.175425 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:37:47.573304032 +0000 UTC Feb 16 17:00:12 crc kubenswrapper[4870]: I0216 17:00:12.328245 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 17:00:12 crc kubenswrapper[4870]: I0216 17:00:12.329836 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45"} Feb 
16 17:00:12 crc kubenswrapper[4870]: I0216 17:00:12.330106 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:12 crc kubenswrapper[4870]: I0216 17:00:12.331481 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:12 crc kubenswrapper[4870]: I0216 17:00:12.331522 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:12 crc kubenswrapper[4870]: I0216 17:00:12.331535 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:13 crc kubenswrapper[4870]: I0216 17:00:13.176369 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:34:01.460286784 +0000 UTC Feb 16 17:00:13 crc kubenswrapper[4870]: I0216 17:00:13.334691 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 17:00:13 crc kubenswrapper[4870]: I0216 17:00:13.335145 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 17:00:13 crc kubenswrapper[4870]: I0216 17:00:13.336845 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45" exitCode=255 Feb 16 17:00:13 crc kubenswrapper[4870]: I0216 17:00:13.336892 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45"} Feb 16 17:00:13 crc 
kubenswrapper[4870]: I0216 17:00:13.336978 4870 scope.go:117] "RemoveContainer" containerID="2c3c91bf2ee068b04fa13fd529cc494c11f726f68295eee7edb2ecc3f078453a" Feb 16 17:00:13 crc kubenswrapper[4870]: I0216 17:00:13.337196 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:13 crc kubenswrapper[4870]: I0216 17:00:13.338595 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:13 crc kubenswrapper[4870]: I0216 17:00:13.338633 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:13 crc kubenswrapper[4870]: I0216 17:00:13.338645 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:13 crc kubenswrapper[4870]: I0216 17:00:13.339267 4870 scope.go:117] "RemoveContainer" containerID="e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45" Feb 16 17:00:13 crc kubenswrapper[4870]: E0216 17:00:13.339463 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 17:00:13 crc kubenswrapper[4870]: I0216 17:00:13.945900 4870 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 17:00:13 crc kubenswrapper[4870]: I0216 17:00:13.946076 4870 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 17:00:14 crc kubenswrapper[4870]: I0216 17:00:14.177442 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:49:28.6216183 +0000 UTC Feb 16 17:00:14 crc kubenswrapper[4870]: I0216 17:00:14.342836 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 17:00:14 crc kubenswrapper[4870]: I0216 17:00:14.634829 4870 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.111681 4870 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.157402 4870 apiserver.go:52] "Watching apiserver" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.163076 4870 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.163640 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.164245 4870 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.164338 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:15 crc kubenswrapper[4870]: E0216 17:00:15.164445 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.164484 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.164612 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 17:00:15 crc kubenswrapper[4870]: E0216 17:00:15.164658 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.164798 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.165375 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:15 crc kubenswrapper[4870]: E0216 17:00:15.165521 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.166897 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.167219 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.167384 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.168559 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.168566 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.168914 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.169517 4870 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.169703 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.173429 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.178275 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:41:34.129107022 +0000 UTC Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.208236 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.223231 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.236391 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.249760 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.260093 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.264763 4870 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.273571 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.288226 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:15 crc kubenswrapper[4870]: I0216 17:00:15.303085 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.066716 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.087942 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.089961 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.106798 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.120309 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.132932 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.144050 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.162636 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.179263 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:44:02.937051656 +0000 UTC Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.222803 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.223018 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.238667 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.252150 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.254411 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.255860 4870 trace.go:236] Trace[110235410]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 17:00:04.120) (total time: 12135ms): Feb 16 17:00:16 crc kubenswrapper[4870]: Trace[110235410]: ---"Objects listed" error: 12135ms (17:00:16.255) Feb 16 17:00:16 crc kubenswrapper[4870]: Trace[110235410]: [12.135347061s] [12.135347061s] END Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.256081 4870 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.256038 4870 
trace.go:236] Trace[841885193]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 17:00:03.748) (total time: 12507ms): Feb 16 17:00:16 crc kubenswrapper[4870]: Trace[841885193]: ---"Objects listed" error: 12507ms (17:00:16.255) Feb 16 17:00:16 crc kubenswrapper[4870]: Trace[841885193]: [12.507487825s] [12.507487825s] END Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.256479 4870 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.257723 4870 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.258721 4870 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.265212 4870 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.270447 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.284200 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.294631 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.306981 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.322225 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with 
unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab84
62f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.358635 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.358704 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.358732 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.358760 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.358783 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.358809 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.358832 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.358858 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.358879 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.358904 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.358928 4870 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.358968 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.358991 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359054 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359076 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359118 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359112 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359142 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359237 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359266 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359289 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359373 4870 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359398 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359417 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359452 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359472 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359493 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359510 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359529 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359548 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359544 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359566 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359589 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359607 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359626 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359644 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359663 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359695 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359714 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359753 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359770 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359782 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359806 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359830 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359852 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359875 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359900 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359924 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359962 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.359986 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360017 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360033 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360038 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: 
"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360049 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360103 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360123 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360140 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360157 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360172 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360187 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360203 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360232 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360249 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360267 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360284 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360300 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360318 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360336 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360351 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " 
Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360367 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360386 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360414 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360432 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360451 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360468 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360486 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360506 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360522 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360554 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360574 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 
17:00:16.360592 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360619 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360636 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360653 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360669 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360685 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") 
pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360701 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360771 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360792 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360809 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360824 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360841 4870 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360856 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360877 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360894 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360916 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360933 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.360980 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361004 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361030 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361052 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361059 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361070 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361127 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361128 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361155 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361255 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361314 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361358 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361362 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361408 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361434 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361453 4870 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361462 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361509 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361537 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361563 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361585 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361606 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361624 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361643 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361664 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361683 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 17:00:16 crc 
kubenswrapper[4870]: I0216 17:00:16.361703 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361721 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361738 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361755 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361773 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361772 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: 
"ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361793 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361814 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361831 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361849 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361867 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 17:00:16 crc 
kubenswrapper[4870]: I0216 17:00:16.361884 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361900 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361917 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361936 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362132 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362152 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362170 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362186 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362205 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362242 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362260 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 
17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362276 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362308 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362325 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362341 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362358 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362375 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" 
(UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362390 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362406 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362426 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362442 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362463 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 17:00:16 crc 
kubenswrapper[4870]: I0216 17:00:16.362480 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362498 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362517 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362534 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362552 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362569 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362604 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362641 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362658 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362676 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362692 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 
17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362708 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362726 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362742 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362759 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362776 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362793 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" 
(UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362810 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362828 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362845 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362867 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362888 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 
17:00:16.362906 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362924 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362955 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362972 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362988 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363005 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363024 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363042 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363059 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363077 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363096 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363113 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363150 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363168 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363186 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363203 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363226 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" 
(UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363244 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363261 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363279 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363295 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363313 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 
16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363330 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363362 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363379 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363397 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363416 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363464 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363490 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363510 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363532 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363557 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363594 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363614 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363633 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363655 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363685 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363718 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363740 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363764 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363794 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363868 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node 
\"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363880 4870 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363892 4870 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363902 4870 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363913 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363924 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363934 4870 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363961 4870 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363974 4870 
reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.364728 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.361990 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362480 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.380305 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362618 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.362748 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363169 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363419 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363626 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.363818 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.364157 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.364536 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.364700 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.364840 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.365031 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.365054 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.365196 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.365277 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.365370 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.365509 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.365522 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.365768 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.366709 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.367150 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.367200 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.367448 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.371129 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.371321 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.371430 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.371709 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.371856 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.372039 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.372532 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.372732 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.372743 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.372860 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.373145 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.373179 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.373393 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.373404 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.373384 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.373621 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.368772 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.373655 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.373845 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.373864 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.374143 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.374229 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.377290 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.378829 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.379750 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.380631 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.381195 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.381502 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.381539 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.381641 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.381898 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.381929 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.382063 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.382304 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.383870 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.384004 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.380420 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.380594 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.385587 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.385639 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.385650 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.385821 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.386050 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.386300 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.386311 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.386471 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.386470 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.386833 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.386683 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.386966 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.387033 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.386841 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.387199 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.387247 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.387540 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.387561 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.385834 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.387838 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.387861 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.388014 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.388086 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.388096 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.388537 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.388650 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.388726 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.388735 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.388924 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.389150 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.389422 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.389454 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.389537 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.382039 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.390039 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.390091 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.381974 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.381971 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.390159 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.390304 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.390335 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.390445 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.390630 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.390668 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.391858 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.390702 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.390769 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.391008 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.391176 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.391262 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.392009 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.392072 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.391408 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:00:16.891376493 +0000 UTC m=+21.374840877 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.391298 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.391432 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.391503 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.391626 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.391649 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.391731 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.391881 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.392337 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.392517 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.392572 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.392869 4870 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.392940 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.392984 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:16.892938276 +0000 UTC m=+21.376402730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.393221 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.393329 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.393576 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.393822 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.393932 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.394137 4870 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.394461 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.394698 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.394574 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.394992 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.394470 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.395199 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.395208 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.395623 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.395705 4870 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.396175 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.396195 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:16.896178266 +0000 UTC m=+21.379642740 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.395927 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.396012 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.396416 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.396497 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.396732 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.396883 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.397042 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.397129 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.397175 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.397539 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.397549 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.397781 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.398305 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.397909 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.400392 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.407743 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.408259 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.408286 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.408301 4870 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.408373 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:16.908352581 +0000 UTC m=+21.391816965 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.408475 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.408814 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.408834 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.408846 4870 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.408887 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:00:16.908876836 +0000 UTC m=+21.392341280 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.409323 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.412271 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.412668 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.414876 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.415267 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.416467 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.416936 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.417459 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.417489 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.417595 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.418013 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.418042 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.418072 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.418565 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.418615 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.420695 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.420717 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.422283 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.422579 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.424322 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.424349 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.425194 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.425249 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.425425 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.425444 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.425445 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.425779 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.425842 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.426043 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.427174 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.428012 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.442079 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465639 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465692 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465758 4870 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465773 4870 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465784 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465800 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465811 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465820 4870 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465831 4870 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465841 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465851 4870 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465862 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465872 4870 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465882 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465894 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465904 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465913 4870 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465922 4870 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465930 4870 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465938 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465962 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465970 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465980 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465989 4870 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.465999 4870 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466009 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466019 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466029 4870 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466038 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466046 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466055 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466064 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466073 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466084 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466092 4870 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466100 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466109 4870 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466117 4870 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466125 4870 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466134 4870 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc 
kubenswrapper[4870]: I0216 17:00:16.466142 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466176 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466184 4870 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466192 4870 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466202 4870 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466210 4870 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466219 4870 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466229 4870 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466238 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466247 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466256 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466265 4870 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466274 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466285 4870 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466294 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" 
DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466303 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466313 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466323 4870 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466333 4870 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466344 4870 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466354 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466351 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 17:00:16 crc 
kubenswrapper[4870]: I0216 17:00:16.466364 4870 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466448 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466465 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466480 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466473 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466494 4870 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466563 4870 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466574 4870 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466586 4870 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466597 4870 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466607 4870 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466617 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466627 4870 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466636 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath 
\"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466645 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466662 4870 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466671 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466679 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466689 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466699 4870 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466708 4870 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466717 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466725 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466734 4870 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466742 4870 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466752 4870 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466762 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466771 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466783 4870 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466792 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466801 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466811 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466821 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466831 4870 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466840 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466849 4870 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node 
\"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466859 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466868 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466878 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466886 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466895 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466904 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466912 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: 
I0216 17:00:16.466921 4870 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466929 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466938 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466964 4870 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466973 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466985 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.466996 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467006 4870 reconciler_common.go:293] "Volume detached for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467020 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467033 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467042 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467051 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467061 4870 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467071 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467080 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" 
(UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467089 4870 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467098 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467107 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467126 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467136 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467145 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467154 4870 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" 
DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467163 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467171 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467181 4870 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467190 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467200 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467210 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467218 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 
17:00:16.467243 4870 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467255 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467266 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467279 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467289 4870 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467297 4870 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467307 4870 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467316 4870 reconciler_common.go:293] 
"Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467325 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467334 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467344 4870 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467355 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467364 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467374 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467383 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467393 4870 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467403 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467413 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467422 4870 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467434 4870 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467444 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467454 4870 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node 
\"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467464 4870 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467474 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467483 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467493 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467502 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467512 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467523 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467532 4870 
reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467542 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467552 4870 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467562 4870 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467572 4870 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467580 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467590 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467599 4870 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" 
(UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467608 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467619 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467628 4870 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467636 4870 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467646 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467655 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467663 4870 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467672 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467682 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467690 4870 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467699 4870 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467708 4870 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467716 4870 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467725 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath 
\"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467734 4870 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.467742 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.472806 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.479901 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.488581 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.506924 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.509135 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.524806 4870 scope.go:117] "RemoveContainer" containerID="e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45" Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.525035 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.528692 4870 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.546663 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.564610 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.568455 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.568493 4870 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.575034 4870 csr.go:261] certificate signing request csr-fbcwz is approved, waiting to be issued Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.576905 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbe
db487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506c
e0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.597618 4870 csr.go:257] certificate signing request csr-fbcwz is issued Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.598317 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.609539 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.620440 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.624372 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-9zmm6"] Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.624889 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-9zmm6" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.625480 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bhb7f"] Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.625873 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bhb7f" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.627240 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.632298 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.632498 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.632644 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.632785 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.633697 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.634306 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.638494 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.660342 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.678611 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.685461 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.693668 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.703243 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.704200 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.713529 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 
17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1
2109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.731304 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f
99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.744624 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.767613 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.769866 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b090777-a023-4789-877e-55d3f30e65f2-host\") pod \"node-ca-9zmm6\" (UID: \"9b090777-a023-4789-877e-55d3f30e65f2\") " pod="openshift-image-registry/node-ca-9zmm6" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.769942 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9b090777-a023-4789-877e-55d3f30e65f2-serviceca\") pod \"node-ca-9zmm6\" (UID: \"9b090777-a023-4789-877e-55d3f30e65f2\") " pod="openshift-image-registry/node-ca-9zmm6" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.769982 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/6ecaee09-f493-4280-9dca-5c0b127c137a-hosts-file\") pod \"node-resolver-bhb7f\" (UID: \"6ecaee09-f493-4280-9dca-5c0b127c137a\") " pod="openshift-dns/node-resolver-bhb7f" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.770003 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnwq5\" (UniqueName: \"kubernetes.io/projected/6ecaee09-f493-4280-9dca-5c0b127c137a-kube-api-access-wnwq5\") pod \"node-resolver-bhb7f\" (UID: \"6ecaee09-f493-4280-9dca-5c0b127c137a\") " pod="openshift-dns/node-resolver-bhb7f" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.770079 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcvcx\" (UniqueName: \"kubernetes.io/projected/9b090777-a023-4789-877e-55d3f30e65f2-kube-api-access-rcvcx\") pod \"node-ca-9zmm6\" (UID: \"9b090777-a023-4789-877e-55d3f30e65f2\") " pod="openshift-image-registry/node-ca-9zmm6" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.784843 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.816402 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f
99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.868242 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.871449 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9b090777-a023-4789-877e-55d3f30e65f2-serviceca\") pod \"node-ca-9zmm6\" (UID: \"9b090777-a023-4789-877e-55d3f30e65f2\") " pod="openshift-image-registry/node-ca-9zmm6" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.871483 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6ecaee09-f493-4280-9dca-5c0b127c137a-hosts-file\") pod \"node-resolver-bhb7f\" (UID: \"6ecaee09-f493-4280-9dca-5c0b127c137a\") " pod="openshift-dns/node-resolver-bhb7f" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.871504 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnwq5\" (UniqueName: 
\"kubernetes.io/projected/6ecaee09-f493-4280-9dca-5c0b127c137a-kube-api-access-wnwq5\") pod \"node-resolver-bhb7f\" (UID: \"6ecaee09-f493-4280-9dca-5c0b127c137a\") " pod="openshift-dns/node-resolver-bhb7f" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.871534 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcvcx\" (UniqueName: \"kubernetes.io/projected/9b090777-a023-4789-877e-55d3f30e65f2-kube-api-access-rcvcx\") pod \"node-ca-9zmm6\" (UID: \"9b090777-a023-4789-877e-55d3f30e65f2\") " pod="openshift-image-registry/node-ca-9zmm6" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.871560 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b090777-a023-4789-877e-55d3f30e65f2-host\") pod \"node-ca-9zmm6\" (UID: \"9b090777-a023-4789-877e-55d3f30e65f2\") " pod="openshift-image-registry/node-ca-9zmm6" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.871602 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b090777-a023-4789-877e-55d3f30e65f2-host\") pod \"node-ca-9zmm6\" (UID: \"9b090777-a023-4789-877e-55d3f30e65f2\") " pod="openshift-image-registry/node-ca-9zmm6" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.871621 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6ecaee09-f493-4280-9dca-5c0b127c137a-hosts-file\") pod \"node-resolver-bhb7f\" (UID: \"6ecaee09-f493-4280-9dca-5c0b127c137a\") " pod="openshift-dns/node-resolver-bhb7f" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.873501 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9b090777-a023-4789-877e-55d3f30e65f2-serviceca\") pod \"node-ca-9zmm6\" (UID: \"9b090777-a023-4789-877e-55d3f30e65f2\") " 
pod="openshift-image-registry/node-ca-9zmm6" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.888354 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.897909 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcvcx\" (UniqueName: \"kubernetes.io/projected/9b090777-a023-4789-877e-55d3f30e65f2-kube-api-access-rcvcx\") pod \"node-ca-9zmm6\" (UID: \"9b090777-a023-4789-877e-55d3f30e65f2\") " pod="openshift-image-registry/node-ca-9zmm6" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.899642 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnwq5\" (UniqueName: \"kubernetes.io/projected/6ecaee09-f493-4280-9dca-5c0b127c137a-kube-api-access-wnwq5\") pod \"node-resolver-bhb7f\" (UID: \"6ecaee09-f493-4280-9dca-5c0b127c137a\") " pod="openshift-dns/node-resolver-bhb7f" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.924813 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.939507 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-9zmm6" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.946249 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bhb7f" Feb 16 17:00:16 crc kubenswrapper[4870]: W0216 17:00:16.960508 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b090777_a023_4789_877e_55d3f30e65f2.slice/crio-4964aeec207e891fe298c438dd731ea00a3e76d4d5446e4c7d73471e9e119412 WatchSource:0}: Error finding container 4964aeec207e891fe298c438dd731ea00a3e76d4d5446e4c7d73471e9e119412: Status 404 returned error can't find the container with id 4964aeec207e891fe298c438dd731ea00a3e76d4d5446e4c7d73471e9e119412 Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.968363 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.973102 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.973177 4870 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.973207 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.973228 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.973252 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.973309 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 17:00:17.973277333 +0000 UTC m=+22.456741717 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.973371 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.973389 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.973403 4870 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.973456 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:17.973438017 +0000 UTC m=+22.456902401 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.973502 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.973514 4870 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.973526 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.973539 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:17.97353326 +0000 UTC m=+22.456997644 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.973543 4870 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.973573 4870 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.973590 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:17.973580261 +0000 UTC m=+22.457044645 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:16 crc kubenswrapper[4870]: E0216 17:00:16.973618 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:17.973610692 +0000 UTC m=+22.457075076 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:16 crc kubenswrapper[4870]: I0216 17:00:16.989556 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.001435 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.011163 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.027169 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"message\\\":\\\"containers 
with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.104265 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-cgzwr"] Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.104724 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.107023 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.107908 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.108361 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.108395 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.108455 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.121397 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.148132 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.161751 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.176244 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.180100 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:45:22.316338092 +0000 UTC Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.189580 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.206275 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.218580 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.224030 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:17 crc kubenswrapper[4870]: E0216 17:00:17.224171 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.224226 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:17 crc kubenswrapper[4870]: E0216 17:00:17.224263 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.230874 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.237035 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with 
unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab84
62f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.269283 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.275963 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a3e693e8-f31b-4cc5-b521-0f37451019ab-rootfs\") pod \"machine-config-daemon-cgzwr\" (UID: \"a3e693e8-f31b-4cc5-b521-0f37451019ab\") " pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.276046 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a3e693e8-f31b-4cc5-b521-0f37451019ab-proxy-tls\") pod \"machine-config-daemon-cgzwr\" (UID: \"a3e693e8-f31b-4cc5-b521-0f37451019ab\") " pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.276102 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3e693e8-f31b-4cc5-b521-0f37451019ab-mcd-auth-proxy-config\") pod \"machine-config-daemon-cgzwr\" (UID: \"a3e693e8-f31b-4cc5-b521-0f37451019ab\") " pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.276138 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2bmc\" (UniqueName: \"kubernetes.io/projected/a3e693e8-f31b-4cc5-b521-0f37451019ab-kube-api-access-k2bmc\") pod \"machine-config-daemon-cgzwr\" (UID: \"a3e693e8-f31b-4cc5-b521-0f37451019ab\") " pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.285824 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.309051 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.346781 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.355818 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9zmm6" event={"ID":"9b090777-a023-4789-877e-55d3f30e65f2","Type":"ContainerStarted","Data":"1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82"} Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.355871 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9zmm6" event={"ID":"9b090777-a023-4789-877e-55d3f30e65f2","Type":"ContainerStarted","Data":"4964aeec207e891fe298c438dd731ea00a3e76d4d5446e4c7d73471e9e119412"} Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.357860 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062"} Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.357922 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8"} Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.357939 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a5cfc829a0f2c08d376fefbf7ccdb64cacb3a5a71a99148da80b64d212acc494"} Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.358939 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1dc9d9b21c60330e037078313bac707b02988d3942e36b97d814aa44959873a8"} Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.360612 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804"} Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.360640 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b321e93ee0543c88cd5254589a07e2b1bda17644942ad5921a909418cc9d8e23"} Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.361929 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bhb7f" event={"ID":"6ecaee09-f493-4280-9dca-5c0b127c137a","Type":"ContainerStarted","Data":"a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1"} Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.361988 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bhb7f" event={"ID":"6ecaee09-f493-4280-9dca-5c0b127c137a","Type":"ContainerStarted","Data":"0d1680792cc487d3bd8ee1c0705d0147c72e0ccf037fed191a7b2ed372426aa2"} Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.369213 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.376929 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.377145 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a3e693e8-f31b-4cc5-b521-0f37451019ab-proxy-tls\") pod \"machine-config-daemon-cgzwr\" (UID: \"a3e693e8-f31b-4cc5-b521-0f37451019ab\") " pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.377219 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3e693e8-f31b-4cc5-b521-0f37451019ab-mcd-auth-proxy-config\") pod \"machine-config-daemon-cgzwr\" (UID: \"a3e693e8-f31b-4cc5-b521-0f37451019ab\") " pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.377249 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2bmc\" (UniqueName: \"kubernetes.io/projected/a3e693e8-f31b-4cc5-b521-0f37451019ab-kube-api-access-k2bmc\") pod \"machine-config-daemon-cgzwr\" (UID: \"a3e693e8-f31b-4cc5-b521-0f37451019ab\") " pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.377286 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a3e693e8-f31b-4cc5-b521-0f37451019ab-rootfs\") pod \"machine-config-daemon-cgzwr\" (UID: \"a3e693e8-f31b-4cc5-b521-0f37451019ab\") " pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.377363 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a3e693e8-f31b-4cc5-b521-0f37451019ab-rootfs\") pod \"machine-config-daemon-cgzwr\" (UID: \"a3e693e8-f31b-4cc5-b521-0f37451019ab\") " 
pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.378142 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3e693e8-f31b-4cc5-b521-0f37451019ab-mcd-auth-proxy-config\") pod \"machine-config-daemon-cgzwr\" (UID: \"a3e693e8-f31b-4cc5-b521-0f37451019ab\") " pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:00:17 crc kubenswrapper[4870]: E0216 17:00:17.382307 4870 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.382484 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a3e693e8-f31b-4cc5-b521-0f37451019ab-proxy-tls\") pod \"machine-config-daemon-cgzwr\" (UID: \"a3e693e8-f31b-4cc5-b521-0f37451019ab\") " pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.382672 4870 scope.go:117] "RemoveContainer" containerID="e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45" Feb 16 17:00:17 crc kubenswrapper[4870]: E0216 17:00:17.382890 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.402618 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2bmc\" (UniqueName: 
\"kubernetes.io/projected/a3e693e8-f31b-4cc5-b521-0f37451019ab-kube-api-access-k2bmc\") pod \"machine-config-daemon-cgzwr\" (UID: \"a3e693e8-f31b-4cc5-b521-0f37451019ab\") " pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.406109 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.420148 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:00:17 crc kubenswrapper[4870]: W0216 17:00:17.436079 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3e693e8_f31b_4cc5_b521_0f37451019ab.slice/crio-6222e279e34d5e0c39f88348a14af020ec472574201f439b03f203e3c6667a9f WatchSource:0}: Error finding container 6222e279e34d5e0c39f88348a14af020ec472574201f439b03f203e3c6667a9f: Status 404 returned error can't find the container with id 6222e279e34d5e0c39f88348a14af020ec472574201f439b03f203e3c6667a9f Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.449637 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.466397 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.481238 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.491699 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-995kl"] Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.492389 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-jjq54"] Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.492602 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.492666 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.495590 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.495898 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-drrrv"] Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.496653 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.496877 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.496884 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.496930 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.497215 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.497278 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.498477 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.500487 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.500692 4870 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.500842 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.501134 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.501288 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.501440 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.512328 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.512893 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.521035 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.536507 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.553039 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.578728 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbe
db487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506c
e0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579311 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-cnibin\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579338 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/52f144f1-d0b6-4871-a439-6aaf51304c4b-cni-binary-copy\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579354 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-multus-cni-dir\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579371 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-var-lib-cni-bin\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579398 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/41366745-32be-4762-84c3-25c4b4e1732b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579414 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-etc-kubernetes\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579429 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-run-k8s-cni-cncf-io\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579448 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/41366745-32be-4762-84c3-25c4b4e1732b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579463 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-multus-conf-dir\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579494 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-os-release\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579509 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-var-lib-cni-multus\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579527 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-run-multus-certs\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579545 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-run-netns\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579563 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25h7v\" (UniqueName: \"kubernetes.io/projected/52f144f1-d0b6-4871-a439-6aaf51304c4b-kube-api-access-25h7v\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579578 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/41366745-32be-4762-84c3-25c4b4e1732b-cnibin\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579592 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-system-cni-dir\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579608 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/41366745-32be-4762-84c3-25c4b4e1732b-system-cni-dir\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579625 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-multus-socket-dir-parent\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579718 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/41366745-32be-4762-84c3-25c4b4e1732b-os-release\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579735 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-444l7\" (UniqueName: \"kubernetes.io/projected/41366745-32be-4762-84c3-25c4b4e1732b-kube-api-access-444l7\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579765 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/41366745-32be-4762-84c3-25c4b4e1732b-cni-binary-copy\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579821 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/52f144f1-d0b6-4871-a439-6aaf51304c4b-multus-daemon-config\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579894 
4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-var-lib-kubelet\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.579926 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-hostroot\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.597521 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.599458 4870 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-16 16:55:16 +0000 UTC, rotation deadline is 2026-11-11 07:38:00.746018495 +0000 UTC Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.599513 4870 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6422h37m43.14650832s for next certificate rotation Feb 16 17:00:17 crc 
kubenswrapper[4870]: I0216 17:00:17.608811 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.621083 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.639526 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.652473 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.665596 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.679447 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681214 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/41366745-32be-4762-84c3-25c4b4e1732b-cni-binary-copy\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681258 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/52f144f1-d0b6-4871-a439-6aaf51304c4b-multus-daemon-config\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681294 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-etc-openvswitch\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681324 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-var-lib-kubelet\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681351 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-hostroot\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681375 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/52f144f1-d0b6-4871-a439-6aaf51304c4b-cni-binary-copy\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681398 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-cnibin\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681430 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/41366745-32be-4762-84c3-25c4b4e1732b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681455 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-multus-cni-dir\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681497 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-var-lib-cni-bin\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681524 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-run-netns\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681557 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-cni-bin\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681589 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-etc-kubernetes\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681616 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-run-k8s-cni-cncf-io\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681645 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-slash\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681672 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-systemd\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681703 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/41366745-32be-4762-84c3-25c4b4e1732b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681728 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-multus-conf-dir\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681754 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-log-socket\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681781 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-cni-netd\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681833 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-os-release\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681860 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-var-lib-cni-multus\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681888 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-var-lib-openvswitch\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681914 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-run-ovn-kubernetes\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.681995 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-run-multus-certs\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682036 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-openvswitch\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682100 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-run-netns\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682126 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-ovnkube-script-lib\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682154 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmshf\" (UniqueName: \"kubernetes.io/projected/650bce90-73d6-474d-ab19-f50252dc8bc3-kube-api-access-dmshf\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682184 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/41366745-32be-4762-84c3-25c4b4e1732b-cnibin\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682208 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-system-cni-dir\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682231 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25h7v\" (UniqueName: \"kubernetes.io/projected/52f144f1-d0b6-4871-a439-6aaf51304c4b-kube-api-access-25h7v\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682255 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-kubelet\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682278 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-ovnkube-config\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682304 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/41366745-32be-4762-84c3-25c4b4e1732b-system-cni-dir\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682330 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-multus-socket-dir-parent\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682355 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-ovn\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682380 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-node-log\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682402 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-env-overrides\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682440 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/41366745-32be-4762-84c3-25c4b4e1732b-os-release\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682464 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/650bce90-73d6-474d-ab19-f50252dc8bc3-ovn-node-metrics-cert\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682491 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-444l7\" (UniqueName: \"kubernetes.io/projected/41366745-32be-4762-84c3-25c4b4e1732b-kube-api-access-444l7\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682518 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-systemd-units\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.682547 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.683441 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/41366745-32be-4762-84c3-25c4b4e1732b-cni-binary-copy\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.684091 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/52f144f1-d0b6-4871-a439-6aaf51304c4b-multus-daemon-config\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.684168 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-run-multus-certs\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.684226 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-run-netns\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.684334 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/41366745-32be-4762-84c3-25c4b4e1732b-system-cni-dir\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.684428 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-var-lib-kubelet\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.684534 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-hostroot\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.684551 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-multus-socket-dir-parent\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.684974 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/41366745-32be-4762-84c3-25c4b4e1732b-os-release\") pod \"multus-additional-cni-plugins-995kl\" (UID: 
\"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.685421 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-multus-conf-dir\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.685507 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-os-release\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.685619 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-system-cni-dir\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.685619 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-cnibin\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.685538 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-var-lib-cni-multus\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.685707 4870 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-etc-kubernetes\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.685707 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-run-k8s-cni-cncf-io\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.685747 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/41366745-32be-4762-84c3-25c4b4e1732b-cnibin\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.685750 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-host-var-lib-cni-bin\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.685921 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/52f144f1-d0b6-4871-a439-6aaf51304c4b-cni-binary-copy\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.686215 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/52f144f1-d0b6-4871-a439-6aaf51304c4b-multus-cni-dir\") pod \"multus-jjq54\" 
(UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.686348 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/41366745-32be-4762-84c3-25c4b4e1732b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.686493 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/41366745-32be-4762-84c3-25c4b4e1732b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.701825 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.704158 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-444l7\" (UniqueName: \"kubernetes.io/projected/41366745-32be-4762-84c3-25c4b4e1732b-kube-api-access-444l7\") pod \"multus-additional-cni-plugins-995kl\" (UID: \"41366745-32be-4762-84c3-25c4b4e1732b\") " pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.705928 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25h7v\" (UniqueName: \"kubernetes.io/projected/52f144f1-d0b6-4871-a439-6aaf51304c4b-kube-api-access-25h7v\") pod \"multus-jjq54\" (UID: \"52f144f1-d0b6-4871-a439-6aaf51304c4b\") " pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.715884 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.758157 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:17Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.783865 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-openvswitch\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.783906 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-ovnkube-script-lib\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.783922 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmshf\" (UniqueName: \"kubernetes.io/projected/650bce90-73d6-474d-ab19-f50252dc8bc3-kube-api-access-dmshf\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.783939 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-kubelet\") pod \"ovnkube-node-drrrv\" (UID: 
\"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.783972 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-ovnkube-config\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.783990 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-ovn\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784005 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-node-log\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784020 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-env-overrides\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784045 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/650bce90-73d6-474d-ab19-f50252dc8bc3-ovn-node-metrics-cert\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" 
Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784063 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-systemd-units\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784079 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784096 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-etc-openvswitch\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784088 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-openvswitch\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784149 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-run-netns\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: 
I0216 17:00:17.784122 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-run-netns\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784208 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-cni-bin\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784224 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784229 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-kubelet\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784274 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-systemd-units\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784287 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-cni-bin\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784254 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-slash\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784315 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-node-log\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784271 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-etc-openvswitch\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784231 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-slash\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784354 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-systemd\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784358 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-ovn\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784372 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-log-socket\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784389 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-cni-netd\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784403 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-systemd\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784422 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-var-lib-openvswitch\") pod \"ovnkube-node-drrrv\" (UID: 
\"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784442 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-run-ovn-kubernetes\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784455 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-log-socket\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784477 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-run-ovn-kubernetes\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784506 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-var-lib-openvswitch\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784510 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-cni-netd\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.784919 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-ovnkube-script-lib\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.785110 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-env-overrides\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.785232 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-ovnkube-config\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.788406 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/650bce90-73d6-474d-ab19-f50252dc8bc3-ovn-node-metrics-cert\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.800472 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:17Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.816869 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-995kl" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.823571 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jjq54" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.827535 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmshf\" (UniqueName: \"kubernetes.io/projected/650bce90-73d6-474d-ab19-f50252dc8bc3-kube-api-access-dmshf\") pod \"ovnkube-node-drrrv\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.833103 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:17 crc kubenswrapper[4870]: W0216 17:00:17.840143 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52f144f1_d0b6_4871_a439_6aaf51304c4b.slice/crio-921e0899947095e7f46b8ca123321aaec454ca14fa1ed276a03f5b7a25fc1f1a WatchSource:0}: Error finding container 921e0899947095e7f46b8ca123321aaec454ca14fa1ed276a03f5b7a25fc1f1a: Status 404 returned error can't find the container with id 921e0899947095e7f46b8ca123321aaec454ca14fa1ed276a03f5b7a25fc1f1a Feb 16 17:00:17 crc kubenswrapper[4870]: W0216 17:00:17.853077 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod650bce90_73d6_474d_ab19_f50252dc8bc3.slice/crio-4d37f852f5f2aebc6de9e0be323618461948d9480fe2bb3f27d8d9c00c61ba19 WatchSource:0}: Error finding container 4d37f852f5f2aebc6de9e0be323618461948d9480fe2bb3f27d8d9c00c61ba19: Status 404 returned error can't find the container with id 4d37f852f5f2aebc6de9e0be323618461948d9480fe2bb3f27d8d9c00c61ba19 Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.861505 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:17Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.902431 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f
99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:17Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.943348 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:17Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.986018 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.986174 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.986230 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:17 crc kubenswrapper[4870]: 
I0216 17:00:17.986277 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:17 crc kubenswrapper[4870]: E0216 17:00:17.986346 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:00:19.986318504 +0000 UTC m=+24.469782888 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:00:17 crc kubenswrapper[4870]: E0216 17:00:17.986353 4870 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:17 crc kubenswrapper[4870]: E0216 17:00:17.986397 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:19.986390846 +0000 UTC m=+24.469855230 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:17 crc kubenswrapper[4870]: E0216 17:00:17.986451 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:17 crc kubenswrapper[4870]: E0216 17:00:17.986460 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:17 crc kubenswrapper[4870]: E0216 17:00:17.986494 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:17 crc kubenswrapper[4870]: E0216 17:00:17.986513 4870 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:17 crc kubenswrapper[4870]: E0216 17:00:17.986514 4870 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:17 crc kubenswrapper[4870]: E0216 17:00:17.986562 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:00:19.98655303 +0000 UTC m=+24.470017414 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.986450 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:17 crc kubenswrapper[4870]: E0216 17:00:17.986596 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:19.986571251 +0000 UTC m=+24.470035675 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:17 crc kubenswrapper[4870]: E0216 17:00:17.986470 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:17 crc kubenswrapper[4870]: E0216 17:00:17.986628 4870 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:17 crc kubenswrapper[4870]: E0216 17:00:17.986668 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:19.986658023 +0000 UTC m=+24.470122527 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:17 crc kubenswrapper[4870]: I0216 17:00:17.994874 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resol
ver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:17Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.054185 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.112495 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.181108 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:23:58.162986299 +0000 UTC Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.223039 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:18 crc kubenswrapper[4870]: E0216 17:00:18.223270 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.227552 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.228517 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.229484 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.230454 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.231281 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.232124 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.232809 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.233629 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.234529 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.235276 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.247437 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.248265 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.249188 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.249837 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.250850 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.251460 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.252310 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.253173 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.253768 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.254543 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.255422 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.255997 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.256794 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.257427 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.257860 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.258838 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.259866 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.260360 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.260936 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.261792 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.262303 4870 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.262431 4870 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.264642 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.265195 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.265644 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.267155 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.268146 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.268673 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.269667 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.270313 4870 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.271361 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.271971 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.273025 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.273989 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.274445 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.275338 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.275889 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.277221 4870 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.277825 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.278333 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.279172 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.279731 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.280779 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.281299 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.365902 4870 generic.go:334] "Generic (PLEG): container finished" podID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerID="ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8" exitCode=0 Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 
17:00:18.366006 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerDied","Data":"ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8"} Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.366074 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerStarted","Data":"4d37f852f5f2aebc6de9e0be323618461948d9480fe2bb3f27d8d9c00c61ba19"} Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.367422 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" event={"ID":"41366745-32be-4762-84c3-25c4b4e1732b","Type":"ContainerStarted","Data":"2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb"} Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.367466 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" event={"ID":"41366745-32be-4762-84c3-25c4b4e1732b","Type":"ContainerStarted","Data":"987db9f28308da7caae78efe06b7d6fb0620193250a451687acd6cc719e5699f"} Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.369276 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerStarted","Data":"75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364"} Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.369364 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerStarted","Data":"5a9cf7157bbb01397ac7936495386a27de8cfbb887938ec788adf37fc58b093b"} Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.369380 4870 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerStarted","Data":"6222e279e34d5e0c39f88348a14af020ec472574201f439b03f203e3c6667a9f"} Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.371233 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jjq54" event={"ID":"52f144f1-d0b6-4871-a439-6aaf51304c4b","Type":"ContainerStarted","Data":"2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e"} Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.371282 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jjq54" event={"ID":"52f144f1-d0b6-4871-a439-6aaf51304c4b","Type":"ContainerStarted","Data":"921e0899947095e7f46b8ca123321aaec454ca14fa1ed276a03f5b7a25fc1f1a"} Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.372220 4870 scope.go:117] "RemoveContainer" containerID="e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45" Feb 16 17:00:18 crc kubenswrapper[4870]: E0216 17:00:18.372416 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 17:00:18 crc kubenswrapper[4870]: E0216 17:00:18.380707 4870 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.390044 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.409219 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 
17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1
2109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.426648 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f
99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.444203 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.455597 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.475428 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.487861 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.509476 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.530886 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.549024 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.563829 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.585806 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.601007 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.627022 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.646508 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.692624 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.722381 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.781833 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.810421 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.843119 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.877156 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.920028 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.965208 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:18 crc kubenswrapper[4870]: I0216 17:00:18.998348 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbe
db487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506c
e0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:18Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc kubenswrapper[4870]: I0216 17:00:19.053144 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:19Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc kubenswrapper[4870]: I0216 17:00:19.077130 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:19Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc kubenswrapper[4870]: I0216 17:00:19.129022 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:19Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc kubenswrapper[4870]: I0216 17:00:19.161721 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:19Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc kubenswrapper[4870]: I0216 17:00:19.182270 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 01:18:51.512087524 +0000 UTC Feb 16 17:00:19 crc kubenswrapper[4870]: I0216 17:00:19.200934 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:19Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc kubenswrapper[4870]: I0216 17:00:19.222244 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:19 crc kubenswrapper[4870]: I0216 17:00:19.222244 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:19 crc kubenswrapper[4870]: E0216 17:00:19.222388 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:19 crc kubenswrapper[4870]: E0216 17:00:19.222463 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:19 crc kubenswrapper[4870]: I0216 17:00:19.237555 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:19Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:19 crc kubenswrapper[4870]: I0216 17:00:19.356483 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:00:19 crc kubenswrapper[4870]: I0216 17:00:19.378486 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerStarted","Data":"cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e"} Feb 16 17:00:19 crc kubenswrapper[4870]: I0216 17:00:19.378552 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerStarted","Data":"29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f"} Feb 16 17:00:19 crc kubenswrapper[4870]: I0216 17:00:19.379022 4870 scope.go:117] 
"RemoveContainer" containerID="e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45" Feb 16 17:00:19 crc kubenswrapper[4870]: E0216 17:00:19.379217 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.011547 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.011703 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:20 crc kubenswrapper[4870]: E0216 17:00:20.011767 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:00:24.011731218 +0000 UTC m=+28.495195602 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.011822 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.011868 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:20 crc kubenswrapper[4870]: E0216 17:00:20.011891 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:20 crc kubenswrapper[4870]: E0216 17:00:20.011919 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:20 crc kubenswrapper[4870]: E0216 17:00:20.011932 4870 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.011939 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:20 crc kubenswrapper[4870]: E0216 17:00:20.012011 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:24.011992835 +0000 UTC m=+28.495457219 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:20 crc kubenswrapper[4870]: E0216 17:00:20.012107 4870 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:20 crc kubenswrapper[4870]: E0216 17:00:20.012152 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:00:24.012144739 +0000 UTC m=+28.495609123 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:20 crc kubenswrapper[4870]: E0216 17:00:20.012195 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:20 crc kubenswrapper[4870]: E0216 17:00:20.012244 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:20 crc kubenswrapper[4870]: E0216 17:00:20.012259 4870 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:20 crc kubenswrapper[4870]: E0216 17:00:20.012339 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:24.012313134 +0000 UTC m=+28.495777708 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:20 crc kubenswrapper[4870]: E0216 17:00:20.012195 4870 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:20 crc kubenswrapper[4870]: E0216 17:00:20.012395 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:24.012385636 +0000 UTC m=+28.495850300 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.184414 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 22:11:56.359679025 +0000 UTC Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.222236 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:20 crc kubenswrapper[4870]: E0216 17:00:20.222436 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.383538 4870 generic.go:334] "Generic (PLEG): container finished" podID="41366745-32be-4762-84c3-25c4b4e1732b" containerID="2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb" exitCode=0 Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.383635 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" event={"ID":"41366745-32be-4762-84c3-25c4b4e1732b","Type":"ContainerDied","Data":"2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb"} Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.387270 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerStarted","Data":"026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2"} Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.387325 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerStarted","Data":"1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100"} Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.389138 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1"} Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.390793 4870 scope.go:117] "RemoveContainer" containerID="e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45" Feb 16 17:00:20 crc kubenswrapper[4870]: E0216 17:00:20.391023 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.402048 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.427057 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.447390 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.460993 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.489016 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.503244 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.516669 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.531562 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.555907 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.570710 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 
17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1
2109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.584287 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f
99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.597489 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.611529 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.630675 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.650510 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.662341 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.680896 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.695799 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.713378 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.726358 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.740331 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.753371 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.765333 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.779430 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.806924 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.822388 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f
99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.835788 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.847935 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.868622 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.882074 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 
17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1
2109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.949921 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.955065 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.965086 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.979434 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:20 crc kubenswrapper[4870]: I0216 17:00:20.996195 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.009020 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.022466 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.038694 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.050664 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.065991 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.086664 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.098585 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.112875 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.125493 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.144242 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.159892 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 
17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1
2109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.180438 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f9
9530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.185416 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 23:01:03.773418404 +0000 UTC Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.196882 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.214445 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.224346 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.224346 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:21 crc kubenswrapper[4870]: E0216 17:00:21.224570 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:21 crc kubenswrapper[4870]: E0216 17:00:21.224487 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.230672 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.247362 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.276917 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.290370 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.317473 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.358919 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.393747 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" event={"ID":"41366745-32be-4762-84c3-25c4b4e1732b","Type":"ContainerStarted","Data":"0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2"} Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.397753 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerStarted","Data":"a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764"} Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.397816 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerStarted","Data":"216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b"} Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.400598 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.441401 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.481280 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.518345 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.565277 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.598091 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.639094 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.689429 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.722150 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.760484 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.798414 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.843074 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.879916 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.917774 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:21 crc kubenswrapper[4870]: I0216 17:00:21.964530 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.008742 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.044574 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.080550 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.123683 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a34270
1bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.160242 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.185791 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:44:51.647425744 +0000 UTC Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.200650 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.223092 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:22 crc kubenswrapper[4870]: E0216 17:00:22.223280 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.240660 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-releas
e\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.405560 4870 generic.go:334] "Generic (PLEG): container finished" podID="41366745-32be-4762-84c3-25c4b4e1732b" containerID="0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2" exitCode=0 Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.405615 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" event={"ID":"41366745-32be-4762-84c3-25c4b4e1732b","Type":"ContainerDied","Data":"0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2"} Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.436756 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.452735 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.474069 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.493016 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 
17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1
2109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.509936 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.526030 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.547208 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.562546 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.599737 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.638447 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.659513 4870 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.666141 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.666187 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.666202 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.666366 4870 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.685824 4870 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.730498 4870 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.730917 4870 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.732615 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc 
kubenswrapper[4870]: I0216 17:00:22.732645 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.732656 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.732673 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.732685 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4870]: E0216 17:00:22.754151 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.759179 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.759210 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.759223 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.759241 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.759252 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.760858 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: E0216 17:00:22.776236 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.782676 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.782732 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.782744 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.782764 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.782775 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.799345 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:
00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: E0216 17:00:22.799556 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.804388 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.804451 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.804477 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.804518 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.804541 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4870]: E0216 17:00:22.822342 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.826878 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.826942 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.826986 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.827017 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.827035 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4870]: E0216 17:00:22.845403 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: E0216 17:00:22.845590 4870 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.847640 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.847681 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.847695 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.847717 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.847730 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.848872 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.879240 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.950941 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.951005 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.951017 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.951040 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:22 crc kubenswrapper[4870]: I0216 17:00:22.951054 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:22Z","lastTransitionTime":"2026-02-16T17:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.054070 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.054123 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.054134 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.054154 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.054166 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.157486 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.157528 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.157539 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.157556 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.157569 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.186807 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 23:55:09.548245206 +0000 UTC Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.222235 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:23 crc kubenswrapper[4870]: E0216 17:00:23.222457 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.222991 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:23 crc kubenswrapper[4870]: E0216 17:00:23.223089 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.260687 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.260746 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.260757 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.260776 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.260788 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.363327 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.363625 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.363739 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.363827 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.363898 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.412040 4870 generic.go:334] "Generic (PLEG): container finished" podID="41366745-32be-4762-84c3-25c4b4e1732b" containerID="b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810" exitCode=0 Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.412100 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" event={"ID":"41366745-32be-4762-84c3-25c4b4e1732b","Type":"ContainerDied","Data":"b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810"} Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.433015 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.452010 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.465105 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.466735 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.466765 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.466778 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.466799 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.466813 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.488165 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.507856 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 
17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1
2109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.522915 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.541669 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.561762 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.569832 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.569925 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc 
kubenswrapper[4870]: I0216 17:00:23.569983 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.570016 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.570037 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.578984 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.599172 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.618903 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.632862 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.644131 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae75433
6b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.657227 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.672397 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc 
kubenswrapper[4870]: I0216 17:00:23.672449 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.672460 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.672528 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.672559 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.679384 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.775523 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.775570 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.775584 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.775605 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.775620 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.879038 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.879093 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.879106 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.879127 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.879142 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.981863 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.981908 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.981918 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.981934 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:23 crc kubenswrapper[4870]: I0216 17:00:23.981961 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:23Z","lastTransitionTime":"2026-02-16T17:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.051322 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.051507 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:24 crc kubenswrapper[4870]: E0216 17:00:24.051566 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.051532801 +0000 UTC m=+36.534997185 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:00:24 crc kubenswrapper[4870]: E0216 17:00:24.051602 4870 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.051635 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:24 crc kubenswrapper[4870]: E0216 17:00:24.051679 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.051656535 +0000 UTC m=+36.535121119 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.051702 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.051737 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:24 crc kubenswrapper[4870]: E0216 17:00:24.051868 4870 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:24 crc kubenswrapper[4870]: E0216 17:00:24.051888 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:24 crc kubenswrapper[4870]: E0216 17:00:24.051917 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:24 crc kubenswrapper[4870]: E0216 
17:00:24.051924 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.051916552 +0000 UTC m=+36.535380926 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:24 crc kubenswrapper[4870]: E0216 17:00:24.051933 4870 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:24 crc kubenswrapper[4870]: E0216 17:00:24.051992 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.051979744 +0000 UTC m=+36.535444148 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:24 crc kubenswrapper[4870]: E0216 17:00:24.052327 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:24 crc kubenswrapper[4870]: E0216 17:00:24.052425 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:24 crc kubenswrapper[4870]: E0216 17:00:24.052482 4870 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:24 crc kubenswrapper[4870]: E0216 17:00:24.052681 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.052612751 +0000 UTC m=+36.536077325 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.085098 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.085206 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.085233 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.085303 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.085338 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.187068 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:35:00.428204383 +0000 UTC Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.188344 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.188416 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.188439 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.188471 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.188494 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.222889 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:24 crc kubenswrapper[4870]: E0216 17:00:24.223160 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.291053 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.291132 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.291150 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.291177 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.291196 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.394124 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.394183 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.394198 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.394222 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.394239 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.419131 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerStarted","Data":"ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5"} Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.422647 4870 generic.go:334] "Generic (PLEG): container finished" podID="41366745-32be-4762-84c3-25c4b4e1732b" containerID="b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790" exitCode=0 Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.422733 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" event={"ID":"41366745-32be-4762-84c3-25c4b4e1732b","Type":"ContainerDied","Data":"b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790"} Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.439082 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.454959 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.469793 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.485873 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.510548 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.510588 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.510600 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.510619 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.510630 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.552002 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.566025 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.576468 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.590240 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.608390 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.613935 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.614016 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.614033 4870 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.614057 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.614074 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.623730 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da
410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.637429 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.651322 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.671363 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.683836 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.699413 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.717509 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.717570 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.717581 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc 
kubenswrapper[4870]: I0216 17:00:24.717599 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.717610 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.820260 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.820298 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.820307 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.820322 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.820334 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.923333 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.923371 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.923380 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.923395 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:24 crc kubenswrapper[4870]: I0216 17:00:24.923405 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:24Z","lastTransitionTime":"2026-02-16T17:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.026023 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.026091 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.026111 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.026137 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.026157 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.128982 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.129040 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.129052 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.129073 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.129087 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.188123 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 10:31:30.722452809 +0000 UTC Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.222473 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.222545 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:25 crc kubenswrapper[4870]: E0216 17:00:25.222634 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:25 crc kubenswrapper[4870]: E0216 17:00:25.222755 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.232864 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.232930 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.232964 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.232988 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.233001 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.335537 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.335980 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.335990 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.336010 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.336021 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.430377 4870 generic.go:334] "Generic (PLEG): container finished" podID="41366745-32be-4762-84c3-25c4b4e1732b" containerID="1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2" exitCode=0 Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.430477 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" event={"ID":"41366745-32be-4762-84c3-25c4b4e1732b","Type":"ContainerDied","Data":"1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2"} Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.438301 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.438358 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.438367 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.438422 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.438434 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.454466 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.469812 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.486115 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.497429 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.511374 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.528415 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.543993 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.544059 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.544070 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.544086 4870 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.544098 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.558094 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T1
7:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.572123 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.582653 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.595639 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.616057 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.634574 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.647823 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.647896 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.647909 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.647932 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.647965 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.652999 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.668474 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.688344 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:25Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.750735 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.750817 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.750841 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.750878 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.750904 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.854214 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.854292 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.854311 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.854349 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.854373 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.956293 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.956348 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.956358 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.956373 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:25 crc kubenswrapper[4870]: I0216 17:00:25.956382 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:25Z","lastTransitionTime":"2026-02-16T17:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.014833 4870 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.059361 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.059405 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.059452 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.059475 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.059490 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.163019 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.163056 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.163064 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.163080 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.163091 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.189291 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 10:49:11.659257381 +0000 UTC Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.227561 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:26 crc kubenswrapper[4870]: E0216 17:00:26.229419 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.251475 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 
17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1
2109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.265828 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.265862 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.265870 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.265886 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.265897 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.270052 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.287126 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.300983 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.320902 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.334470 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.345880 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.368780 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.369308 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.369367 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.369380 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.369399 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.369413 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.383138 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.424558 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.475270 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.475306 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.475315 4870 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.475335 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.475345 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.475402 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerStarted","Data":"c2162b9d76355171ce01efe01fdd006b7596724b3de30c839c68318ed3eb9fec"} Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.476263 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.476322 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.478124 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.483428 4870 generic.go:334] "Generic (PLEG): container finished" podID="41366745-32be-4762-84c3-25c4b4e1732b" containerID="f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3" exitCode=0 Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.483539 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" event={"ID":"41366745-32be-4762-84c3-25c4b4e1732b","Type":"ContainerDied","Data":"f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3"} Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.501900 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.503007 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.519666 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.533327 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.534417 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.535915 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.548507 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.561134 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.575410 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.578686 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.578735 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.578748 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.578766 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.578778 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.598619 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.612760 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.630972 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.644593 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.665280 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2162b9d76355171ce01efe01fdd006b7596724b3de30c839c68318ed3eb9fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.678921 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 
17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1
2109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.680978 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.681011 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.681050 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.681285 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.681308 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.693910 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.709632 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.723655 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.744810 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.758719 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.773766 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.784736 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.784767 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.784779 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.784798 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.784812 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.796109 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:26Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.888143 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.888229 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.888242 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.888265 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.888280 4870 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.990520 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.990566 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.990581 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.990601 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:26 crc kubenswrapper[4870]: I0216 17:00:26.990614 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:26Z","lastTransitionTime":"2026-02-16T17:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.093464 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.093538 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.093556 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.093586 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.093607 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.189881 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:02:36.68418828 +0000 UTC Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.195909 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.195968 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.195987 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.196005 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.196017 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.222401 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:27 crc kubenswrapper[4870]: E0216 17:00:27.222585 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.222895 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:27 crc kubenswrapper[4870]: E0216 17:00:27.223157 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.297768 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.297811 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.297825 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.297845 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.297857 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.400712 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.400781 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.400809 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.400849 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.400870 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.493199 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" event={"ID":"41366745-32be-4762-84c3-25c4b4e1732b","Type":"ContainerStarted","Data":"af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188"} Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.504165 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.504364 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.504496 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.504641 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.504779 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.511836 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:27Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.530874 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:27Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.548652 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:27Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.566556 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:27Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.585145 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:27Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.597064 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:27Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.607979 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.608050 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.608072 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.608104 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 
17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.608126 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.614962 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb91
44e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:27Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.640493 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0
7b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:27Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.660689 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:27Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.682092 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:27Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.696538 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:27Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.711623 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.711679 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.711691 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.711708 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.711749 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.719107 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2162b9d76355171ce01efe01fdd006b7596724b3de30c839c68318ed3eb9fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:27Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.740076 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 
17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1
2109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:27Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.755076 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:27Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.770585 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:27Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.815067 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.815151 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.815168 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.815207 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.815244 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.918017 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.918087 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.918104 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.918130 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:27 crc kubenswrapper[4870]: I0216 17:00:27.918148 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:27Z","lastTransitionTime":"2026-02-16T17:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.020793 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.020834 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.020845 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.020862 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.020873 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.124291 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.124336 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.124345 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.124363 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.124375 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.191202 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 11:49:10.576966534 +0000 UTC Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.222865 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:28 crc kubenswrapper[4870]: E0216 17:00:28.223062 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.226444 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.226482 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.226492 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.226508 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.226526 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.329680 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.329736 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.329746 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.329766 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.329778 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.432056 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.432109 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.432120 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.432138 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.432149 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.535922 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.535995 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.536007 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.536028 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.536046 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.639378 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.639433 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.639443 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.639460 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.639472 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.742904 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.742987 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.742999 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.743021 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.743034 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.845311 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.845371 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.845384 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.845402 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.845412 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.948566 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.948637 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.948649 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.948668 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:28 crc kubenswrapper[4870]: I0216 17:00:28.948679 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:28Z","lastTransitionTime":"2026-02-16T17:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.051835 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.052014 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.052032 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.052055 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.052069 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.170428 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.170498 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.170509 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.170533 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.170547 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.192199 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 11:05:20.181539329 +0000 UTC Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.222321 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.222442 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:29 crc kubenswrapper[4870]: E0216 17:00:29.222493 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:29 crc kubenswrapper[4870]: E0216 17:00:29.222709 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.273779 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.273846 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.273863 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.273889 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.273908 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.377528 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.377588 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.377600 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.377620 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.377638 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.480935 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.481027 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.481039 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.481060 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.481676 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.584035 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.584121 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.584161 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.584199 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.584224 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.687987 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.688063 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.688086 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.688121 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.688144 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.790837 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.790911 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.790929 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.790977 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.790998 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.893833 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.893900 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.893916 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.893966 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.893982 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.996676 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.996739 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.996750 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.996772 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:29 crc kubenswrapper[4870]: I0216 17:00:29.996784 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:29Z","lastTransitionTime":"2026-02-16T17:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.105655 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.105749 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.105778 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.105814 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.105838 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.193161 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 08:18:10.490753741 +0000 UTC Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.209612 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.209679 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.209691 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.209713 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.209726 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.222012 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:30 crc kubenswrapper[4870]: E0216 17:00:30.222200 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.313553 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.313606 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.313619 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.313638 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.313705 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.416751 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.416835 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.416852 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.416890 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.416909 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.507477 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94"] Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.508054 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.512407 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.512609 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.519860 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.519893 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.519905 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.519925 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.519956 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.607575 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kv64\" (UniqueName: \"kubernetes.io/projected/28571c8d-03d1-4c81-9d6d-23328c859237-kube-api-access-9kv64\") pod \"ovnkube-control-plane-749d76644c-snc94\" (UID: \"28571c8d-03d1-4c81-9d6d-23328c859237\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.607732 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/28571c8d-03d1-4c81-9d6d-23328c859237-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-snc94\" (UID: \"28571c8d-03d1-4c81-9d6d-23328c859237\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.607833 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/28571c8d-03d1-4c81-9d6d-23328c859237-env-overrides\") pod \"ovnkube-control-plane-749d76644c-snc94\" (UID: \"28571c8d-03d1-4c81-9d6d-23328c859237\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.607916 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/28571c8d-03d1-4c81-9d6d-23328c859237-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-snc94\" (UID: \"28571c8d-03d1-4c81-9d6d-23328c859237\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.615141 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a34270
1bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:30Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.623436 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.623666 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.623679 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.623699 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.623711 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.635104 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:30Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.656183 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:30Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.670223 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:30Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.682643 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:30Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.699518 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 
17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1
2109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:30Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.708922 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/28571c8d-03d1-4c81-9d6d-23328c859237-env-overrides\") pod \"ovnkube-control-plane-749d76644c-snc94\" (UID: \"28571c8d-03d1-4c81-9d6d-23328c859237\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.709053 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/28571c8d-03d1-4c81-9d6d-23328c859237-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-snc94\" (UID: \"28571c8d-03d1-4c81-9d6d-23328c859237\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.710076 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/28571c8d-03d1-4c81-9d6d-23328c859237-env-overrides\") pod 
\"ovnkube-control-plane-749d76644c-snc94\" (UID: \"28571c8d-03d1-4c81-9d6d-23328c859237\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.710256 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kv64\" (UniqueName: \"kubernetes.io/projected/28571c8d-03d1-4c81-9d6d-23328c859237-kube-api-access-9kv64\") pod \"ovnkube-control-plane-749d76644c-snc94\" (UID: \"28571c8d-03d1-4c81-9d6d-23328c859237\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.710292 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/28571c8d-03d1-4c81-9d6d-23328c859237-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-snc94\" (UID: \"28571c8d-03d1-4c81-9d6d-23328c859237\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.710909 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/28571c8d-03d1-4c81-9d6d-23328c859237-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-snc94\" (UID: \"28571c8d-03d1-4c81-9d6d-23328c859237\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.714759 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:30Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.718162 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/28571c8d-03d1-4c81-9d6d-23328c859237-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-snc94\" (UID: \"28571c8d-03d1-4c81-9d6d-23328c859237\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.727009 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.727370 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.727383 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.727403 4870 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.727413 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.731567 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:30Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.734515 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kv64\" (UniqueName: \"kubernetes.io/projected/28571c8d-03d1-4c81-9d6d-23328c859237-kube-api-access-9kv64\") pod \"ovnkube-control-plane-749d76644c-snc94\" (UID: \"28571c8d-03d1-4c81-9d6d-23328c859237\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.744087 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:30Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.764540 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2162b9d76355171ce01efe01fdd006b7596724b3de30c839c68318ed3eb9fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:30Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.779941 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:1
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:30Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.794150 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb887938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:30Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.807863 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:30Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.821651 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:30Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.830016 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.830079 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.830091 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc 
kubenswrapper[4870]: I0216 17:00:30.830109 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.830121 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.834491 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:30Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.850730 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:30Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.922288 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.934365 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.934422 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.934433 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.934451 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:30 crc kubenswrapper[4870]: I0216 17:00:30.934463 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:30Z","lastTransitionTime":"2026-02-16T17:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.037999 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.038073 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.038086 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.038109 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.038123 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.141159 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.141214 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.141224 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.141242 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.141252 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.194384 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:45:28.928721166 +0000 UTC Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.221882 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.222131 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:31 crc kubenswrapper[4870]: E0216 17:00:31.222340 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:31 crc kubenswrapper[4870]: E0216 17:00:31.222521 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.244077 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.244135 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.244150 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.244178 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.244196 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.347648 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.347703 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.347717 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.347737 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.347752 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.451170 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.451228 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.451236 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.451256 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.451282 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.512144 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" event={"ID":"28571c8d-03d1-4c81-9d6d-23328c859237","Type":"ContainerStarted","Data":"99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1"} Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.512222 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" event={"ID":"28571c8d-03d1-4c81-9d6d-23328c859237","Type":"ContainerStarted","Data":"53fd4a95cfb916e4e4dc88ab004efff6059c490175935b5775ce8c329df1c7fb"} Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.514904 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovnkube-controller/0.log" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.519079 4870 generic.go:334] "Generic (PLEG): container finished" podID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerID="c2162b9d76355171ce01efe01fdd006b7596724b3de30c839c68318ed3eb9fec" exitCode=1 Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.519123 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerDied","Data":"c2162b9d76355171ce01efe01fdd006b7596724b3de30c839c68318ed3eb9fec"} Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.520458 4870 scope.go:117] "RemoveContainer" containerID="c2162b9d76355171ce01efe01fdd006b7596724b3de30c839c68318ed3eb9fec" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.548018 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.553579 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.553633 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.553643 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.553660 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.553671 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.565019 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb887938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.584616 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.603190 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.618555 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.633425 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.643044 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.656016 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.656058 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.656070 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.656085 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.656096 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.673807 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.677249 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-zsfxc"] Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.677739 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:31 crc kubenswrapper[4870]: E0216 17:00:31.677816 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.691474 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.704767 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.724286 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.724456 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftth5\" 
(UniqueName: \"kubernetes.io/projected/d13b0b83-258a-4545-b358-e08252dbbe87-kube-api-access-ftth5\") pod \"network-metrics-daemon-zsfxc\" (UID: \"d13b0b83-258a-4545-b358-e08252dbbe87\") " pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.724526 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs\") pod \"network-metrics-daemon-zsfxc\" (UID: \"d13b0b83-258a-4545-b358-e08252dbbe87\") " pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.745784 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2162b9d76355171ce01efe01fdd006b7596724b3de30c839c68318ed3eb9fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2162b9d76355171ce01efe01fdd006b7596724b3de30c839c68318ed3eb9fec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"h factory\\\\nI0216 17:00:30.923074 6158 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0216 17:00:30.923149 6158 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 17:00:30.923160 6158 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:00:30.923169 6158 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 17:00:30.923181 6158 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 17:00:30.923187 6158 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:00:30.923197 6158 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:00:30.923221 6158 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:30.923238 6158 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:30.923330 6158 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 17:00:30.923343 6158 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o
://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.759134 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.759184 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.759195 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.759222 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.759237 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.763833 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 
17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1
2109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.779372 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.795820 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.810996 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.825292 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftth5\" (UniqueName: \"kubernetes.io/projected/d13b0b83-258a-4545-b358-e08252dbbe87-kube-api-access-ftth5\") pod \"network-metrics-daemon-zsfxc\" (UID: \"d13b0b83-258a-4545-b358-e08252dbbe87\") " pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.825404 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs\") pod \"network-metrics-daemon-zsfxc\" (UID: \"d13b0b83-258a-4545-b358-e08252dbbe87\") " pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:31 crc kubenswrapper[4870]: E0216 17:00:31.825585 4870 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:31 crc kubenswrapper[4870]: E0216 17:00:31.825713 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs 
podName:d13b0b83-258a-4545-b358-e08252dbbe87 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.325639002 +0000 UTC m=+36.809103406 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs") pod "network-metrics-daemon-zsfxc" (UID: "d13b0b83-258a-4545-b358-e08252dbbe87") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.838603 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var
/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containe
rID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a56
46fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.844763 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftth5\" (UniqueName: \"kubernetes.io/projected/d13b0b83-258a-4545-b358-e08252dbbe87-kube-api-access-ftth5\") pod \"network-metrics-daemon-zsfxc\" (UID: \"d13b0b83-258a-4545-b358-e08252dbbe87\") " pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.854041 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.864360 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.864398 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.864414 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.864432 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.864447 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.869439 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.892238 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bde
ae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.908675 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.935914 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 
17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1
2109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.953383 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.966994 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.967058 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.967072 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.967089 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.967102 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:31Z","lastTransitionTime":"2026-02-16T17:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.969021 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:31 crc kubenswrapper[4870]: I0216 17:00:31.980984 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.001938 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2162b9d76355171ce01efe01fdd006b7596724b3de30c839c68318ed3eb9fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2162b9d76355171ce01efe01fdd006b7596724b3de30c839c68318ed3eb9fec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"h factory\\\\nI0216 17:00:30.923074 6158 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0216 17:00:30.923149 6158 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 17:00:30.923160 6158 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:00:30.923169 6158 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 17:00:30.923181 6158 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 17:00:30.923187 6158 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:00:30.923197 6158 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:00:30.923221 6158 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:30.923238 6158 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:30.923330 6158 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 17:00:30.923343 6158 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o
://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:31Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.016667 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.029769 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.042541 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc 
kubenswrapper[4870]: I0216 17:00:32.061627 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c
59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.069815 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.069855 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.069876 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.069907 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.069921 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.083176 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.099650 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.115001 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.127448 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.127596 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:32 crc kubenswrapper[4870]: E0216 17:00:32.127675 4870 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:00:48.127654663 +0000 UTC m=+52.611119047 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.127706 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.127728 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.127760 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:32 crc kubenswrapper[4870]: E0216 17:00:32.127766 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:32 crc kubenswrapper[4870]: E0216 17:00:32.127806 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:32 crc kubenswrapper[4870]: E0216 17:00:32.127822 4870 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:32 crc kubenswrapper[4870]: E0216 17:00:32.127875 4870 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:32 crc kubenswrapper[4870]: E0216 17:00:32.127904 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:48.127876839 +0000 UTC m=+52.611341413 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:32 crc kubenswrapper[4870]: E0216 17:00:32.127929 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:48.12791929 +0000 UTC m=+52.611383884 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:32 crc kubenswrapper[4870]: E0216 17:00:32.127938 4870 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:32 crc kubenswrapper[4870]: E0216 17:00:32.128000 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:32 crc kubenswrapper[4870]: E0216 17:00:32.128030 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:32 crc kubenswrapper[4870]: E0216 17:00:32.128050 4870 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:32 crc kubenswrapper[4870]: E0216 17:00:32.128056 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:48.128036263 +0000 UTC m=+52.611500647 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:32 crc kubenswrapper[4870]: E0216 17:00:32.128101 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:48.128083924 +0000 UTC m=+52.611548488 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.173055 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.173103 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.173112 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.173135 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.173149 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.195496 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:32:10.438958349 +0000 UTC Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.223065 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:32 crc kubenswrapper[4870]: E0216 17:00:32.223272 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.276166 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.276204 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.276213 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.276228 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.276239 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.329324 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs\") pod \"network-metrics-daemon-zsfxc\" (UID: \"d13b0b83-258a-4545-b358-e08252dbbe87\") " pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:32 crc kubenswrapper[4870]: E0216 17:00:32.329567 4870 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:32 crc kubenswrapper[4870]: E0216 17:00:32.329697 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs podName:d13b0b83-258a-4545-b358-e08252dbbe87 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:33.329672275 +0000 UTC m=+37.813136729 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs") pod "network-metrics-daemon-zsfxc" (UID: "d13b0b83-258a-4545-b358-e08252dbbe87") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.379403 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.379452 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.379464 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.379484 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.379496 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.483203 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.483252 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.483261 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.483279 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.483290 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.524712 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovnkube-controller/0.log" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.527640 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerStarted","Data":"484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e"} Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.528178 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.530229 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" event={"ID":"28571c8d-03d1-4c81-9d6d-23328c859237","Type":"ContainerStarted","Data":"368d8655cd750405c9ca8c279f2bdb55e6d700aa73caf439a5cbee5fd5e0f3ae"} Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.550827 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.566404 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.579875 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.585876 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.585924 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.585933 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.585969 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.585983 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.598979 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z 
is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.617038 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.630728 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 
17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db1
2109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.643713 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.657184 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.668219 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.687894 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2162b9d76355171ce01efe01fdd006b7596724b3de30c839c68318ed3eb9fec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"h factory\\\\nI0216 17:00:30.923074 6158 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0216 17:00:30.923149 6158 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0216 17:00:30.923160 6158 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:00:30.923169 6158 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 17:00:30.923181 6158 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 17:00:30.923187 6158 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:00:30.923197 6158 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:00:30.923221 6158 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:30.923238 6158 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:30.923330 6158 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 17:00:30.923343 6158 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-
run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-l
ib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.689667 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.689751 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.689791 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.689815 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.689828 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.705006 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.718765 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.732048 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc 
kubenswrapper[4870]: I0216 17:00:32.745752 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.760987 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.774687 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.790450 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.791721 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.791756 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.791768 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.791786 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.791796 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.804121 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.853779 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.874171 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc 
kubenswrapper[4870]: I0216 17:00:32.894231 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.894277 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.894291 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.894315 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.894330 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.895113 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.909678 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.922028 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.935627 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.958477 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea
576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.972113 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.983292 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.996654 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.996721 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.996730 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.996749 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.996760 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:32Z","lastTransitionTime":"2026-02-16T17:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:32 crc kubenswrapper[4870]: I0216 17:00:32.997477 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z 
is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.008778 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655cd750405c9ca8c279f2bdb55e6d700aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.020884 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.034005 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.046840 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.057177 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.075705 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2162b9d76355171ce01efe01fdd006b7596724b3de30c839c68318ed3eb9fec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"h factory\\\\nI0216 17:00:30.923074 6158 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0216 17:00:30.923149 6158 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0216 17:00:30.923160 6158 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:00:30.923169 6158 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 17:00:30.923181 6158 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 17:00:30.923187 6158 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:00:30.923197 6158 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:00:30.923221 6158 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:30.923238 6158 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:30.923330 6158 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 17:00:30.923343 6158 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-
run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-l
ib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.100465 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.100549 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.100564 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.100585 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.100599 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.109781 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.109822 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.109830 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.109850 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.109868 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4870]: E0216 17:00:33.124372 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.129358 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.129408 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.129439 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.129458 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.129469 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4870]: E0216 17:00:33.148593 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.158494 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.158558 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.158575 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.158601 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.158624 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4870]: E0216 17:00:33.176176 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.181242 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.181292 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.181308 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.181331 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.181346 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.196474 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 13:28:11.298577288 +0000 UTC Feb 16 17:00:33 crc kubenswrapper[4870]: E0216 17:00:33.197389 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",
\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.201893 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.201940 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.201968 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.201989 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.202005 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4870]: E0216 17:00:33.217323 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: E0216 17:00:33.217464 4870 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.220019 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.220058 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.220069 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.220090 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.220106 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.222209 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:33 crc kubenswrapper[4870]: E0216 17:00:33.222347 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.223117 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:33 crc kubenswrapper[4870]: E0216 17:00:33.223260 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.223286 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.223394 4870 scope.go:117] "RemoveContainer" containerID="e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45" Feb 16 17:00:33 crc kubenswrapper[4870]: E0216 17:00:33.223443 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.323609 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.323666 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.323680 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.323699 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.323712 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.337636 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs\") pod \"network-metrics-daemon-zsfxc\" (UID: \"d13b0b83-258a-4545-b358-e08252dbbe87\") " pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:33 crc kubenswrapper[4870]: E0216 17:00:33.337853 4870 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:33 crc kubenswrapper[4870]: E0216 17:00:33.337980 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs podName:d13b0b83-258a-4545-b358-e08252dbbe87 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:35.337955195 +0000 UTC m=+39.821419579 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs") pod "network-metrics-daemon-zsfxc" (UID: "d13b0b83-258a-4545-b358-e08252dbbe87") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.425968 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.426053 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.426068 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.426113 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.426128 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.528896 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.528935 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.528957 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.528971 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.528981 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.536581 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.539658 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35"} Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.540600 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.544047 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovnkube-controller/1.log" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.544928 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovnkube-controller/0.log" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.551604 4870 generic.go:334] "Generic (PLEG): container finished" podID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerID="484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e" exitCode=1 Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.551696 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerDied","Data":"484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e"} Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.551851 4870 scope.go:117] "RemoveContainer" containerID="c2162b9d76355171ce01efe01fdd006b7596724b3de30c839c68318ed3eb9fec" 
Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.552414 4870 scope.go:117] "RemoveContainer" containerID="484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e" Feb 16 17:00:33 crc kubenswrapper[4870]: E0216 17:00:33.552555 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.575186 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 
17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.596393 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.611512 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.623343 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.631975 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.632056 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.632081 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.632115 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.632133 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.653568 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2162b9d76355171ce01efe01fdd006b7596724b3de30c839c68318ed3eb9fec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"h factory\\\\nI0216 17:00:30.923074 6158 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0216 17:00:30.923149 6158 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 
17:00:30.923160 6158 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:00:30.923169 6158 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 17:00:30.923181 6158 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 17:00:30.923187 6158 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:00:30.923197 6158 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:00:30.923221 6158 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:30.923238 6158 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:30.923330 6158 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 17:00:30.923343 6158 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\
"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.669412 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.680642 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.724181 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc 
kubenswrapper[4870]: I0216 17:00:33.735057 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.735122 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.735142 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.735175 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.735197 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.738804 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.757147 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.777102 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.794678 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.820457 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea
576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.832713 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.843368 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.843416 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.843428 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.843444 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.843456 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.851054 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.866963 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bde
ae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.884433 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f
425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655cd750405c9ca8c279f2bdb55e6d700aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.903089 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17
:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.917185 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb887938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.931902 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc 
kubenswrapper[4870]: I0216 17:00:33.946041 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.946104 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.946124 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.946153 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.946172 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:33Z","lastTransitionTime":"2026-02-16T17:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.952384 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.970725 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:33 crc kubenswrapper[4870]: I0216 17:00:33.987760 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:33Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.002502 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.032732 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.049208 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.049268 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.049284 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.049307 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.049324 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.056982 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.070816 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.087290 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.103047 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655c
d750405c9ca8c279f2bdb55e6d700aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.119624 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef
908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.138813 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.152189 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.152248 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.152265 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.152293 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.152311 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.155905 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.167374 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.189798 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2162b9d76355171ce01efe01fdd006b7596724b3de30c839c68318ed3eb9fec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"h factory\\\\nI0216 17:00:30.923074 6158 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0216 17:00:30.923149 6158 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 
17:00:30.923160 6158 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:00:30.923169 6158 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 17:00:30.923181 6158 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 17:00:30.923187 6158 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:00:30.923197 6158 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:00:30.923221 6158 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:30.923238 6158 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:00:30.923330 6158 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 17:00:30.923343 6158 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"pod openshift-image-registry/node-ca-9zmm6\\\\nI0216 17:00:32.762353 6340 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0216 17:00:32.760007 6340 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 
0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z]\\\\nI0216 17:00:32.762360 6340 services_controller.go:445] Built service openshift-marketplace/community-operators LB template configs for network=default: []services.lbConfig(nil)\\\\nI0216 17:00:32.762371 6340 services_controller.go:451] Built service openshift-marketplace/community-operator\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"moun
tPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.197487 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-12-12 03:14:23.230390765 +0000 UTC Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.223015 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:34 crc kubenswrapper[4870]: E0216 17:00:34.223178 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.255426 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.255464 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.255474 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.255488 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.255498 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.358685 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.358807 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.358937 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.358988 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.359009 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.461622 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.461694 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.461709 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.461728 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.461741 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.557604 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovnkube-controller/1.log" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.561509 4870 scope.go:117] "RemoveContainer" containerID="484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e" Feb 16 17:00:34 crc kubenswrapper[4870]: E0216 17:00:34.561701 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.563453 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.563482 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.563492 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.563507 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.563517 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.584040 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"pod openshift-image-registry/node-ca-9zmm6\\\\nI0216 17:00:32.762353 6340 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0216 17:00:32.760007 6340 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start 
default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z]\\\\nI0216 17:00:32.762360 6340 services_controller.go:445] Built service openshift-marketplace/community-operators LB template configs for network=default: []services.lbConfig(nil)\\\\nI0216 17:00:32.762371 6340 services_controller.go:451] Built service openshift-marketplace/community-operator\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bcc
b5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.604629 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.624006 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.649645 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.663121 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.666094 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.666125 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.666135 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.666150 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.666160 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.676856 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451f
dd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.691020 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.706252 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc 
kubenswrapper[4870]: I0216 17:00:34.723794 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.741409 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.757061 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.768602 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.768652 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.768663 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.768681 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.768694 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.770657 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.782643 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655cd750405c9ca8c279f2bdb55e6d70
0aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.803393 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.819961 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.832187 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.847605 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:34Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.871066 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.871109 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.871121 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.871138 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.871151 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.973870 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.973919 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.973931 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.973973 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:34 crc kubenswrapper[4870]: I0216 17:00:34.973987 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:34Z","lastTransitionTime":"2026-02-16T17:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.077381 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.077458 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.077478 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.077508 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.077529 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.181038 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.181086 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.181098 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.181116 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.181132 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.198413 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 03:29:57.117750897 +0000 UTC Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.222899 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.223039 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.222910 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:35 crc kubenswrapper[4870]: E0216 17:00:35.223208 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:35 crc kubenswrapper[4870]: E0216 17:00:35.223325 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:35 crc kubenswrapper[4870]: E0216 17:00:35.223463 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.284251 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.284331 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.284348 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.284375 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.284397 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.361474 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs\") pod \"network-metrics-daemon-zsfxc\" (UID: \"d13b0b83-258a-4545-b358-e08252dbbe87\") " pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:35 crc kubenswrapper[4870]: E0216 17:00:35.361740 4870 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:35 crc kubenswrapper[4870]: E0216 17:00:35.361854 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs podName:d13b0b83-258a-4545-b358-e08252dbbe87 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:39.361824385 +0000 UTC m=+43.845288799 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs") pod "network-metrics-daemon-zsfxc" (UID: "d13b0b83-258a-4545-b358-e08252dbbe87") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.386415 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.386462 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.386475 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.386491 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.386505 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.489355 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.489396 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.489405 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.489420 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.489430 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.591650 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.591709 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.591720 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.591734 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.591744 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.694619 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.694709 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.694725 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.694750 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.694766 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.797361 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.797398 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.797406 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.797421 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.797433 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.900364 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.900410 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.900420 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.900436 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:35 crc kubenswrapper[4870]: I0216 17:00:35.900447 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:35Z","lastTransitionTime":"2026-02-16T17:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.004470 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.004546 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.004559 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.004582 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.004600 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.107107 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.107178 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.107189 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.107215 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.107227 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.199571 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:21:53.874075502 +0000 UTC Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.209054 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.209114 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.209123 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.209137 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.209150 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.222017 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:36 crc kubenswrapper[4870]: E0216 17:00:36.222164 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.241402 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:36Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.256530 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:36Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.269881 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:36Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.285751 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:36Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.297736 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:36Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.312254 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.312522 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.312605 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.312711 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.312782 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.318175 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoin
t\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:36Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.368714 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655cd750405c9ca8c279f2bdb55e6d700aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:36Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:36 crc 
kubenswrapper[4870]: I0216 17:00:36.399908 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:36Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.414919 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35
512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:36Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.416164 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.416292 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.416361 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.416420 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.416440 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.430888 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:36Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.443347 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:36Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.464639 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"pod openshift-image-registry/node-ca-9zmm6\\\\nI0216 17:00:32.762353 6340 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0216 17:00:32.760007 6340 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start 
default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z]\\\\nI0216 17:00:32.762360 6340 services_controller.go:445] Built service openshift-marketplace/community-operators LB template configs for network=default: []services.lbConfig(nil)\\\\nI0216 17:00:32.762371 6340 services_controller.go:451] Built service openshift-marketplace/community-operator\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bcc
b5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:36Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.483432 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:36Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.498787 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:36Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.512016 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:36Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:36 crc 
kubenswrapper[4870]: I0216 17:00:36.520374 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.520417 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.520430 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.520450 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.520465 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.529352 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:36Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.547115 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:36Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.623986 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.624038 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.624056 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc 
kubenswrapper[4870]: I0216 17:00:36.624083 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.624103 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.727664 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.727709 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.727717 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.727731 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.727741 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.831470 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.831600 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.831628 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.831665 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.831699 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.935709 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.936230 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.936486 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.936719 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:36 crc kubenswrapper[4870]: I0216 17:00:36.936994 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:36Z","lastTransitionTime":"2026-02-16T17:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.040725 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.041166 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.041240 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.041351 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.041437 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.144359 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.144406 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.144421 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.144438 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.144450 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.200564 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:39:56.192293189 +0000 UTC Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.221861 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.222019 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:37 crc kubenswrapper[4870]: E0216 17:00:37.222050 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:37 crc kubenswrapper[4870]: E0216 17:00:37.222265 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.222717 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:37 crc kubenswrapper[4870]: E0216 17:00:37.222877 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.247092 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.247142 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.247158 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.247180 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.247193 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.350312 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.350369 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.350386 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.350408 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.350424 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.454048 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.454127 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.454146 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.454169 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.454195 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.558828 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.558920 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.558984 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.559024 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.559046 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.661699 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.661742 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.661751 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.661811 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.661824 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.765046 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.765138 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.765151 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.765166 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.765177 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.870051 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.870107 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.870125 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.870147 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.870164 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.973528 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.973588 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.973602 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.973621 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:37 crc kubenswrapper[4870]: I0216 17:00:37.973634 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:37Z","lastTransitionTime":"2026-02-16T17:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.076397 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.076493 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.076518 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.076554 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.076580 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.180278 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.180329 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.180338 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.180361 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.180372 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.201194 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 23:25:02.260592207 +0000 UTC Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.222818 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:38 crc kubenswrapper[4870]: E0216 17:00:38.223060 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.284610 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.284660 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.284671 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.284694 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.284709 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.387986 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.388034 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.388043 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.388062 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.388075 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.490190 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.490228 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.490239 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.490258 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.490270 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.593117 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.593181 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.593200 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.593227 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.593247 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.696895 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.697010 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.697029 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.697057 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.697077 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.799679 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.799741 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.799758 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.799787 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.799804 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.903629 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.903693 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.903711 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.903736 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:38 crc kubenswrapper[4870]: I0216 17:00:38.903753 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:38Z","lastTransitionTime":"2026-02-16T17:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.006299 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.006370 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.006398 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.006438 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.006462 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.110243 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.110340 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.110362 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.110395 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.110417 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.201891 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:34:11.860150296 +0000 UTC Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.213350 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.213395 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.213409 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.213429 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.213442 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.222734 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.222809 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.222761 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:39 crc kubenswrapper[4870]: E0216 17:00:39.222925 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:39 crc kubenswrapper[4870]: E0216 17:00:39.223029 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:39 crc kubenswrapper[4870]: E0216 17:00:39.223114 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.316307 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.316362 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.316374 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.316397 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.316410 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.405212 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs\") pod \"network-metrics-daemon-zsfxc\" (UID: \"d13b0b83-258a-4545-b358-e08252dbbe87\") " pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:39 crc kubenswrapper[4870]: E0216 17:00:39.405487 4870 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:39 crc kubenswrapper[4870]: E0216 17:00:39.405671 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs podName:d13b0b83-258a-4545-b358-e08252dbbe87 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:47.405628711 +0000 UTC m=+51.889093125 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs") pod "network-metrics-daemon-zsfxc" (UID: "d13b0b83-258a-4545-b358-e08252dbbe87") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.419719 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.419782 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.419797 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.419816 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.419828 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.522820 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.522862 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.522871 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.522890 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.522903 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.626306 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.626353 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.626367 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.626387 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.626401 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.729617 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.729663 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.729674 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.729691 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.729704 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.833063 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.833721 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.833745 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.833772 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.833822 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.937696 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.937763 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.937775 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.937804 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:39 crc kubenswrapper[4870]: I0216 17:00:39.937815 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:39Z","lastTransitionTime":"2026-02-16T17:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.041070 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.041124 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.041136 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.041157 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.041169 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.144843 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.145124 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.145134 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.145153 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.145167 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.202796 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 03:47:14.28018367 +0000 UTC Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.222579 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:40 crc kubenswrapper[4870]: E0216 17:00:40.222755 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.248696 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.248773 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.248800 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.248835 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.248863 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.351694 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.351743 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.351755 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.351772 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.351785 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.454579 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.454644 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.454659 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.454679 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.454705 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.558319 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.558416 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.558436 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.558466 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.558485 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.661796 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.661862 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.661874 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.661893 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.662168 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.766870 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.766994 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.767011 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.767413 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.767444 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.870899 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.870977 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.870991 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.871012 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.871025 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.972994 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.973045 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.973061 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.973079 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:40 crc kubenswrapper[4870]: I0216 17:00:40.973092 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:40Z","lastTransitionTime":"2026-02-16T17:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.075991 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.076040 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.076052 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.076074 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.076090 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.178982 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.179026 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.179059 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.179317 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.179396 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.203478 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:33:47.972349982 +0000 UTC Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.222010 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.222034 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:41 crc kubenswrapper[4870]: E0216 17:00:41.222184 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:00:41 crc kubenswrapper[4870]: E0216 17:00:41.222232 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.222601 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:41 crc kubenswrapper[4870]: E0216 17:00:41.222732 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.281811 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.281858 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.281868 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.281889 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.281901 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.385577 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.385711 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.385782 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.385815 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.385876 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.489652 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.489739 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.489777 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.489815 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.489838 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.592277 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.592345 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.592376 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.592408 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.592429 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.695296 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.695325 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.695334 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.695350 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.695359 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.798471 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.798557 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.798575 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.798611 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.798631 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.902756 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.902832 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.902846 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.902870 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:41 crc kubenswrapper[4870]: I0216 17:00:41.902885 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:41Z","lastTransitionTime":"2026-02-16T17:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.006164 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.006266 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.006286 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.006313 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.006334 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.109462 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.109513 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.109525 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.109540 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.109550 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.204130 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 11:53:06.876542449 +0000 UTC Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.211814 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.211855 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.211877 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.211898 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.211911 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.222600 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:42 crc kubenswrapper[4870]: E0216 17:00:42.222747 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.314467 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.314521 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.314532 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.314553 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.314590 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.416934 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.416999 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.417008 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.417026 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.417036 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.519938 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.520057 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.520084 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.520116 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.520147 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.623022 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.623084 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.623095 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.623118 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.623133 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.725861 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.725925 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.725968 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.725994 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.726011 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.828249 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.828298 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.828317 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.828344 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.828368 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.930652 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.930695 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.930711 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.930734 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:42 crc kubenswrapper[4870]: I0216 17:00:42.930745 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:42Z","lastTransitionTime":"2026-02-16T17:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.033848 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.033920 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.033970 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.034007 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.034028 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.137688 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.137747 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.137761 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.137785 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.137799 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.205207 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:31:03.70435635 +0000 UTC Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.222782 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.222791 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.222818 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:43 crc kubenswrapper[4870]: E0216 17:00:43.223040 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:43 crc kubenswrapper[4870]: E0216 17:00:43.223178 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:00:43 crc kubenswrapper[4870]: E0216 17:00:43.223312 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.241138 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.241203 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.241230 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.241258 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.241276 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.344680 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.344736 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.344752 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.344775 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.344792 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.448057 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.448133 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.448160 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.448260 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.448320 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.487783 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.487852 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.487870 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.487894 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.487909 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4870]: E0216 17:00:43.508678 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:43Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.513976 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.514049 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.514071 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.514100 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.514124 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4870]: E0216 17:00:43.527390 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:43Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.531829 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.531885 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.531894 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.531910 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.531940 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4870]: E0216 17:00:43.545524 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:43Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.549459 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.549517 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.549529 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.549545 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.549558 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4870]: E0216 17:00:43.561203 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:43Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.564828 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.564877 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.564891 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.564911 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.564922 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4870]: E0216 17:00:43.586670 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:43Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:43 crc kubenswrapper[4870]: E0216 17:00:43.586862 4870 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.589049 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.589099 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.589139 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.589163 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.589178 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.691674 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.691728 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.691739 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.691760 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.691791 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.793870 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.793931 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.793979 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.794003 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.794019 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.897217 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.897298 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.897323 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.897360 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:43 crc kubenswrapper[4870]: I0216 17:00:43.897385 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:43Z","lastTransitionTime":"2026-02-16T17:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.000358 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.000445 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.000467 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.000502 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.000525 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.103780 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.103832 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.103841 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.103866 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.103879 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.206204 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 05:43:55.833302036 +0000 UTC Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.206521 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.206554 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.206564 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.206583 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.206594 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.222999 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:44 crc kubenswrapper[4870]: E0216 17:00:44.223157 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.309361 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.309492 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.309511 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.309538 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.309556 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.413546 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.413596 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.413610 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.413635 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.413651 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.516328 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.516377 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.516390 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.516406 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.516417 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.618773 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.618807 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.618818 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.618835 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.618845 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.721762 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.721821 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.721838 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.721866 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.721882 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.825785 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.825866 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.825893 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.825986 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.826015 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.928745 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.928791 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.928804 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.928827 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:44 crc kubenswrapper[4870]: I0216 17:00:44.928841 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:44Z","lastTransitionTime":"2026-02-16T17:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.031827 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.031904 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.031914 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.031932 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.031976 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.134572 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.134658 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.134678 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.134707 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.134729 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.206598 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:44:37.053394675 +0000 UTC Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.221881 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.221930 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.222025 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:45 crc kubenswrapper[4870]: E0216 17:00:45.222069 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:45 crc kubenswrapper[4870]: E0216 17:00:45.222163 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:45 crc kubenswrapper[4870]: E0216 17:00:45.222272 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.237636 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.237729 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.237747 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.237903 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.237921 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.340833 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.340888 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.340902 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.340921 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.340937 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.443554 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.443602 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.443650 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.443699 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.443716 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.545992 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.546053 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.546064 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.546081 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.546093 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.648901 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.649006 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.649025 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.649050 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.649068 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.752370 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.752450 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.752473 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.752503 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.752525 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.855529 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.855586 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.855607 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.855632 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.855647 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.958923 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.959064 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.959087 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.959119 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:45 crc kubenswrapper[4870]: I0216 17:00:45.959142 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:45Z","lastTransitionTime":"2026-02-16T17:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.063043 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.063110 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.063128 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.063157 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.063176 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.167807 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.167889 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.167900 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.167923 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.167980 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.207387 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 04:14:39.606010289 +0000 UTC Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.226047 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:46 crc kubenswrapper[4870]: E0216 17:00:46.226737 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.250982 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.272210 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.272258 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.272275 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.272304 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.272328 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.275064 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.296857 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.323425 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.351463 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea
576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.367365 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.375842 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.375891 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.375906 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.375928 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.375957 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.381861 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.400097 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bde
ae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.416840 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f
425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655cd750405c9ca8c279f2bdb55e6d700aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.443443 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 
17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.459288 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.472615 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.478738 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.478788 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.478799 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.478817 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.478831 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.483546 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.505114 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"pod openshift-image-registry/node-ca-9zmm6\\\\nI0216 17:00:32.762353 6340 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0216 17:00:32.760007 6340 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start 
default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z]\\\\nI0216 17:00:32.762360 6340 services_controller.go:445] Built service openshift-marketplace/community-operators LB template configs for network=default: []services.lbConfig(nil)\\\\nI0216 17:00:32.762371 6340 services_controller.go:451] Built service openshift-marketplace/community-operator\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bcc
b5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.523519 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.537528 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.554108 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:46Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:46 crc 
kubenswrapper[4870]: I0216 17:00:46.581589 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.581658 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.581682 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.581722 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.581753 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.685190 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.685243 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.685258 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.685283 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.685301 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.788153 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.788210 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.788231 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.788257 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.788281 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.891510 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.891585 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.891603 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.891630 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.891648 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.994756 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.994840 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.994877 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.994911 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:46 crc kubenswrapper[4870]: I0216 17:00:46.994933 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:46Z","lastTransitionTime":"2026-02-16T17:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.098487 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.098585 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.098611 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.098638 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.098658 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.201048 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.201117 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.201132 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.201153 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.201171 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.207596 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 01:24:34.394630204 +0000 UTC Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.221910 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.222046 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:47 crc kubenswrapper[4870]: E0216 17:00:47.222133 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.222180 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:47 crc kubenswrapper[4870]: E0216 17:00:47.222422 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:47 crc kubenswrapper[4870]: E0216 17:00:47.222584 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.304062 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.304113 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.304124 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.304144 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.304157 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.407749 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.407829 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.407853 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.407888 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.407912 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.429627 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs\") pod \"network-metrics-daemon-zsfxc\" (UID: \"d13b0b83-258a-4545-b358-e08252dbbe87\") " pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:47 crc kubenswrapper[4870]: E0216 17:00:47.430026 4870 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:47 crc kubenswrapper[4870]: E0216 17:00:47.430211 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs podName:d13b0b83-258a-4545-b358-e08252dbbe87 nodeName:}" failed. No retries permitted until 2026-02-16 17:01:03.430166593 +0000 UTC m=+67.913631177 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs") pod "network-metrics-daemon-zsfxc" (UID: "d13b0b83-258a-4545-b358-e08252dbbe87") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.510817 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.510930 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.510988 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.511028 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.511113 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.614474 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.614545 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.614572 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.614607 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.614654 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.718669 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.718734 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.718752 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.718783 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.718801 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.821860 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.821904 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.821915 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.821934 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.821972 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.925077 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.925518 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.925804 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.926094 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:47 crc kubenswrapper[4870]: I0216 17:00:47.926313 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:47Z","lastTransitionTime":"2026-02-16T17:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.030155 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.030220 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.030239 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.030269 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.030287 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.119712 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.133987 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.134035 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.134050 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.134067 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.134081 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.138735 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:00:48 crc kubenswrapper[4870]: E0216 17:00:48.138900 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:01:20.138869968 +0000 UTC m=+84.622334362 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.139013 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.139081 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.139124 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.139185 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:48 crc kubenswrapper[4870]: E0216 17:00:48.139344 4870 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:48 crc kubenswrapper[4870]: E0216 17:00:48.139435 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:01:20.139408233 +0000 UTC m=+84.622872657 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:00:48 crc kubenswrapper[4870]: E0216 17:00:48.139927 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:48 crc kubenswrapper[4870]: E0216 17:00:48.140007 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:48 crc kubenswrapper[4870]: E0216 17:00:48.140029 4870 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:48 crc kubenswrapper[4870]: E0216 17:00:48.140087 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 17:01:20.140069622 +0000 UTC m=+84.623534046 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:48 crc kubenswrapper[4870]: E0216 17:00:48.140669 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:00:48 crc kubenswrapper[4870]: E0216 17:00:48.140703 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:00:48 crc kubenswrapper[4870]: E0216 17:00:48.140721 4870 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:48 crc kubenswrapper[4870]: E0216 17:00:48.140766 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 17:01:20.140755781 +0000 UTC m=+84.624220175 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:00:48 crc kubenswrapper[4870]: E0216 17:00:48.140686 4870 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:48 crc kubenswrapper[4870]: E0216 17:00:48.141415 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:01:20.141372838 +0000 UTC m=+84.624837402 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.146732 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.168056 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.188533 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.208234 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:16:44.665742774 +0000 UTC Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.213393 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.222906 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:48 crc kubenswrapper[4870]: E0216 17:00:48.223129 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.223353 4870 scope.go:117] "RemoveContainer" containerID="484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.235984 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.236046 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.236059 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.236082 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.236097 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.244648 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.260381 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.273394 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.289932 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.304338 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655c
d750405c9ca8c279f2bdb55e6d700aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.324163 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\"
,\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.339372 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.339454 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.339473 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.339500 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.339516 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.341622 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18
fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.355781 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.371591 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.392780 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"pod openshift-image-registry/node-ca-9zmm6\\\\nI0216 17:00:32.762353 6340 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0216 17:00:32.760007 6340 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z]\\\\nI0216 17:00:32.762360 6340 services_controller.go:445] Built service openshift-marketplace/community-operators LB template configs for network=default: []services.lbConfig(nil)\\\\nI0216 17:00:32.762371 6340 services_controller.go:451] Built service openshift-marketplace/community-operator\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bcc
b5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.406516 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.419708 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.432197 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc 
kubenswrapper[4870]: I0216 17:00:48.447819 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.447871 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.447884 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.448132 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.448158 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.550857 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.550904 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.550916 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.550979 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.550991 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.613919 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovnkube-controller/1.log" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.616821 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerStarted","Data":"3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f"} Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.617669 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.637623 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5
bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.654246 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.654310 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.654320 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.654338 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.654350 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.658119 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655cd750405c9ca8c279f2bdb55e6d700aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.684011 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.700824 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.718755 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.738466 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.757593 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.757683 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.757698 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.757721 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.757760 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.768319 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"pod openshift-image-registry/node-ca-9zmm6\\\\nI0216 17:00:32.762353 6340 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0216 17:00:32.760007 6340 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start 
default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z]\\\\nI0216 17:00:32.762360 6340 services_controller.go:445] Built service openshift-marketplace/community-operators LB template configs for network=default: []services.lbConfig(nil)\\\\nI0216 17:00:32.762371 6340 services_controller.go:451] Built service 
openshift-marketplace/community-operator\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\
\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.785654 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\"
,\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.799430 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.814025 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.828650 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.841653 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.853663 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc 
kubenswrapper[4870]: I0216 17:00:48.861218 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.861252 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.861262 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.861279 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.861289 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.866170 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.879808 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.895295 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.909492 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:48Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.963332 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.963409 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.963425 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.963447 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:48 crc kubenswrapper[4870]: I0216 17:00:48.963459 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:48Z","lastTransitionTime":"2026-02-16T17:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.067103 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.067168 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.067189 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.067214 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.067235 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.170313 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.170364 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.170376 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.170397 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.170410 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.209102 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:51:22.522277337 +0000 UTC Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.222615 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.222714 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:49 crc kubenswrapper[4870]: E0216 17:00:49.222838 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.223028 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:49 crc kubenswrapper[4870]: E0216 17:00:49.223146 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:49 crc kubenswrapper[4870]: E0216 17:00:49.223458 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.274606 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.275020 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.275261 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.275402 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.275534 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.378655 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.378724 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.378743 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.378770 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.378791 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.482381 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.482502 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.482564 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.482600 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.482661 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.586384 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.586821 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.587009 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.587173 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.587341 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.623999 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovnkube-controller/2.log" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.626000 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovnkube-controller/1.log" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.632691 4870 generic.go:334] "Generic (PLEG): container finished" podID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerID="3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f" exitCode=1 Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.633065 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerDied","Data":"3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f"} Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.633285 4870 scope.go:117] "RemoveContainer" containerID="484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.635211 4870 scope.go:117] "RemoveContainer" containerID="3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f" Feb 16 17:00:49 crc kubenswrapper[4870]: E0216 17:00:49.635577 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.664836 4870 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.685678 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd
1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.691411 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.691458 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.691472 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.691493 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.691507 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.700663 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655cd750405c9ca8c279f2bdb55e6d700aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.731766 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.746235 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.760519 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.775098 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.794702 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.794778 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.794801 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.794834 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.794857 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.809434 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"pod openshift-image-registry/node-ca-9zmm6\\\\nI0216 17:00:32.762353 6340 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0216 17:00:32.760007 6340 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start 
default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z]\\\\nI0216 17:00:32.762360 6340 services_controller.go:445] Built service openshift-marketplace/community-operators LB template configs for network=default: []services.lbConfig(nil)\\\\nI0216 17:00:32.762371 6340 services_controller.go:451] Built service openshift-marketplace/community-operator\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:49Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 17:00:49.152089 6562 services_controller.go:445] Built service openshift-kube-storage-version-migrator-operator/metrics LB template configs for 
network=default: []services.lbConfig(nil)\\\\nF0216 17:00:49.152100 6562 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z]\\\\nI0216 17:00:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.832933 4870 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804b
c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3
b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.849785 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.863647 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.879879 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.894477 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.897468 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.897543 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.897562 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:49 crc 
kubenswrapper[4870]: I0216 17:00:49.897582 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.897594 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:49Z","lastTransitionTime":"2026-02-16T17:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.911472 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.927775 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.945173 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:49 crc kubenswrapper[4870]: I0216 17:00:49.963216 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.000212 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.000287 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.000307 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.000340 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.000394 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.104096 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.104198 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.104223 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.104263 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.104289 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.158237 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.177500 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.185684 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.208786 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.208849 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.208869 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.208899 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.208917 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.209053 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.209318 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:33:30.693883122 +0000 UTC Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.222587 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:50 crc kubenswrapper[4870]: E0216 17:00:50.222782 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.233165 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.263041 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.296994 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea
576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.313067 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.313160 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.313184 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.313219 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.313240 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.318162 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.335622 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.355469 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.371671 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655c
d750405c9ca8c279f2bdb55e6d700aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.390014 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\"
,\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.407101 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.416013 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.416100 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.416122 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.416153 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.416175 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.424287 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.437802 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.464004 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://484107d4245ea58d1fa2b79a0a461ba0ef1d908dfb212b4dbf66e58fa937577e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"message\\\":\\\"pod openshift-image-registry/node-ca-9zmm6\\\\nI0216 17:00:32.762353 6340 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0216 17:00:32.760007 6340 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start 
default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:32Z is after 2025-08-24T17:21:41Z]\\\\nI0216 17:00:32.762360 6340 services_controller.go:445] Built service openshift-marketplace/community-operators LB template configs for network=default: []services.lbConfig(nil)\\\\nI0216 17:00:32.762371 6340 services_controller.go:451] Built service openshift-marketplace/community-operator\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:49Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 17:00:49.152089 6562 services_controller.go:445] Built service openshift-kube-storage-version-migrator-operator/metrics LB template configs for 
network=default: []services.lbConfig(nil)\\\\nF0216 17:00:49.152100 6562 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z]\\\\nI0216 17:00:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.479676 4870 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745
3265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.495119 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.507439 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc 
kubenswrapper[4870]: I0216 17:00:50.519332 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.519380 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.519390 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.519408 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.519421 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.622171 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.622603 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.622747 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.622974 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.623118 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.638836 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovnkube-controller/2.log" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.643505 4870 scope.go:117] "RemoveContainer" containerID="3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f" Feb 16 17:00:50 crc kubenswrapper[4870]: E0216 17:00:50.643731 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.658145 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.671218 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc 
kubenswrapper[4870]: I0216 17:00:50.690508 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.710221 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1daf779e-125b-4b65-a0a5-1d17de09d7f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a830b7770df5d6eb09cd233bf335c59c23fcc311afc7b2fb2df511ee4f7a1f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22759612b983e0c11f0aef2e271a252d00fa8bcfafabf5176d17d29ba811f485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7349c946be2d324b8b6393304de50fff9f89a1143c1d4249e2c8bfacb1df1251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.722714 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.725595 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.725636 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.725648 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 
17:00:50.725666 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.725677 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.738400 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.755663 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.775054 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.790576 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.809079 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.829023 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.829110 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.829134 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.829161 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 
17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.829185 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.832031 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb91
44e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.850998 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3
d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655cd750405c9ca8c279f2bdb55e6d700aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.876451 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21
c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.896120 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.914941 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.929186 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.931840 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.931897 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.931911 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.931935 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.931985 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:50Z","lastTransitionTime":"2026-02-16T17:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.950562 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:49Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 17:00:49.152089 6562 services_controller.go:445] Built service openshift-kube-storage-version-migrator-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0216 17:00:49.152100 6562 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z]\\\\nI0216 17:00:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bcc
b5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:50 crc kubenswrapper[4870]: I0216 17:00:50.966271 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\"
,\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:50Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.034269 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.034614 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.034630 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.034647 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.034659 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.138157 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.138200 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.138211 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.138231 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.138243 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.210456 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:27:41.3043696 +0000 UTC Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.222942 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.223017 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:51 crc kubenswrapper[4870]: E0216 17:00:51.223141 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.223169 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:51 crc kubenswrapper[4870]: E0216 17:00:51.223361 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:00:51 crc kubenswrapper[4870]: E0216 17:00:51.223410 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.241603 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.241655 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.241674 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.241716 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.241740 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.345512 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.345593 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.345610 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.345642 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.345661 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.449177 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.449250 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.449264 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.449285 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.449298 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.552115 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.552167 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.552177 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.552202 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.552211 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.655300 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.655342 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.655351 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.655369 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.655380 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.758608 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.758714 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.758744 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.758776 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.758800 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.861426 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.861479 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.861491 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.861519 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.861531 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.964448 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.964503 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.964515 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.964537 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:51 crc kubenswrapper[4870]: I0216 17:00:51.964549 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:51Z","lastTransitionTime":"2026-02-16T17:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.069714 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.069794 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.069816 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.069844 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.069867 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.183706 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.183797 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.183822 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.183858 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.183882 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.211121 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 08:41:06.829041373 +0000 UTC Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.222886 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:52 crc kubenswrapper[4870]: E0216 17:00:52.223185 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.286865 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.286934 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.286972 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.286996 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.287015 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.390718 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.390811 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.390836 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.390864 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.390884 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.494042 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.494084 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.494095 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.494115 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.494129 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.597799 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.597841 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.597857 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.597876 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.597888 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.700365 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.700490 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.700516 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.700554 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.700587 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.802891 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.803027 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.803054 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.803090 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.803117 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.906184 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.906255 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.906272 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.906302 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:52 crc kubenswrapper[4870]: I0216 17:00:52.906321 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:52Z","lastTransitionTime":"2026-02-16T17:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.009206 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.009265 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.009276 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.009298 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.009311 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.112648 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.112730 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.112747 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.112775 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.112798 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.211381 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 23:08:59.341463288 +0000 UTC Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.216493 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.216579 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.216598 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.216627 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.216649 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.222889 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.222997 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.223030 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:53 crc kubenswrapper[4870]: E0216 17:00:53.223146 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:53 crc kubenswrapper[4870]: E0216 17:00:53.223360 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:53 crc kubenswrapper[4870]: E0216 17:00:53.223460 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.319744 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.319778 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.319786 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.319804 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.319814 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.423722 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.423766 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.423779 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.423801 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.423814 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.526792 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.526845 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.526856 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.526875 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.526888 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.628751 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.628789 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.628800 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.628822 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.628836 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4870]: E0216 17:00:53.642848 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:53Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.647380 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.647450 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.647471 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.647498 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.647511 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4870]: E0216 17:00:53.662831 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:53Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.667179 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.667241 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.667255 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.667284 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.667298 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4870]: E0216 17:00:53.681388 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:53Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.685664 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.685711 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.685722 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.685743 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.685762 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4870]: E0216 17:00:53.700143 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:53Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.704640 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.704678 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.704690 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.704710 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.704722 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4870]: E0216 17:00:53.718231 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:53Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:53 crc kubenswrapper[4870]: E0216 17:00:53.718360 4870 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.720237 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.720316 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.720329 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.720358 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.720374 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.823421 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.823471 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.823480 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.823501 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.823518 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.927381 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.927453 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.927472 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.927511 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:53 crc kubenswrapper[4870]: I0216 17:00:53.927552 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:53Z","lastTransitionTime":"2026-02-16T17:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.031019 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.031069 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.031084 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.031106 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.031153 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.134425 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.134507 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.134531 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.134562 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.134586 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.211997 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 03:04:05.937067448 +0000 UTC Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.223038 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:54 crc kubenswrapper[4870]: E0216 17:00:54.223321 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.238177 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.238237 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.238258 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.238284 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.238311 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.342036 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.342102 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.342120 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.342146 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.342167 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.445621 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.445669 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.445682 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.445702 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.445717 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.548737 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.548804 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.548824 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.548848 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.548864 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.651868 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.651913 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.651925 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.651971 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.651987 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.755455 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.755528 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.755550 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.755571 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.755585 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.859699 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.859778 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.859797 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.859825 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.859848 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.963142 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.963242 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.963258 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.963310 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:54 crc kubenswrapper[4870]: I0216 17:00:54.963327 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:54Z","lastTransitionTime":"2026-02-16T17:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.067051 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.067350 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.067453 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.067530 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.067594 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.171120 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.171188 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.171208 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.171241 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.171262 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.212430 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:33:55.211559537 +0000 UTC Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.222902 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.222902 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:55 crc kubenswrapper[4870]: E0216 17:00:55.223206 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:55 crc kubenswrapper[4870]: E0216 17:00:55.223326 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.223359 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:55 crc kubenswrapper[4870]: E0216 17:00:55.223673 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.274736 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.274811 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.274836 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.274869 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.274893 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.378232 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.378282 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.378296 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.378313 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.378323 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.481775 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.481825 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.481834 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.481852 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.481863 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.584558 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.584613 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.584631 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.584655 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.584667 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.688143 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.688204 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.688220 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.688247 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.688263 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.791122 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.791201 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.791211 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.791231 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.791241 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.894104 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.894179 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.894206 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.894240 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.894261 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.996848 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.996895 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.996907 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.996924 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:55 crc kubenswrapper[4870]: I0216 17:00:55.996937 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:55Z","lastTransitionTime":"2026-02-16T17:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.099064 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.099139 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.099159 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.099187 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.099205 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.203267 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.203325 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.203336 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.203355 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.203366 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.213062 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 17:22:12.968537037 +0000 UTC Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.222255 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:56 crc kubenswrapper[4870]: E0216 17:00:56.222382 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.242190 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.271726 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:49Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 17:00:49.152089 6562 services_controller.go:445] Built service openshift-kube-storage-version-migrator-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0216 17:00:49.152100 6562 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z]\\\\nI0216 17:00:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bcc
b5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.292864 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\"
,\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.306191 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.306279 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.306296 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.306326 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.306346 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.310262 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18
fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.325549 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.345044 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.364157 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.380273 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc 
kubenswrapper[4870]: I0216 17:00:56.395751 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.409794 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.409872 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.409891 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.409922 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.410037 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.410779 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.426629 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.441282 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1daf779e-125b-4b65-a0a5-1d17de09d7f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a830b7770df5d6eb09cd233bf335c59c23fcc311afc7b2fb2df511ee4f7a1f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22759612b983e0c11f0aef2e271a252d00fa8bcfafabf5176d17d29ba811f485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7349c946be2d324b8b6393304de50fff9f89a1143c1d4249e2c8bfacb1df1251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da
8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.461629 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.481384 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.500375 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655c
d750405c9ca8c279f2bdb55e6d700aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.512190 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.512221 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.512230 4870 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.512245 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.512256 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.535571 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774
f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.555654 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.574277 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:56Z is after 2025-08-24T17:21:41Z" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.615293 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.615362 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.615381 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.615406 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.615423 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.718593 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.718936 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.719026 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.719159 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.719257 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.821965 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.822365 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.822378 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.822394 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.822404 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.925421 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.925469 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.925478 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.925494 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:56 crc kubenswrapper[4870]: I0216 17:00:56.925507 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:56Z","lastTransitionTime":"2026-02-16T17:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.028376 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.028432 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.028446 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.028491 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.028509 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.132129 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.132511 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.132643 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.132778 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.132905 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.213481 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 17:36:19.159203771 +0000 UTC Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.223053 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.223079 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.223089 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:57 crc kubenswrapper[4870]: E0216 17:00:57.223713 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:00:57 crc kubenswrapper[4870]: E0216 17:00:57.223722 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:57 crc kubenswrapper[4870]: E0216 17:00:57.223923 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.235854 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.235887 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.235897 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.235914 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.235926 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.338829 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.338888 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.338900 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.338921 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.338938 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.442912 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.443538 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.443772 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.443935 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.444110 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.547078 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.547464 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.547604 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.547692 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.547776 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.651086 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.651150 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.651162 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.651185 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.651199 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.753926 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.754249 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.754353 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.754428 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.754494 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.864359 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.864524 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.864538 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.864555 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.864568 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.967585 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.967659 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.967671 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.967720 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:57 crc kubenswrapper[4870]: I0216 17:00:57.967732 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:57Z","lastTransitionTime":"2026-02-16T17:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.072319 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.072358 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.072368 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.072384 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.072395 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.174802 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.174841 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.174850 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.174864 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.174872 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.214489 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 18:31:03.221676851 +0000 UTC Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.223631 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:00:58 crc kubenswrapper[4870]: E0216 17:00:58.223981 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.278236 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.278322 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.278346 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.278372 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.278389 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.381879 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.382017 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.382040 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.382070 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.382091 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.485750 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.485801 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.485812 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.485835 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.485859 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.589243 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.589308 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.589327 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.589355 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.589375 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.692091 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.692150 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.692165 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.692191 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.692207 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.794832 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.794876 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.794887 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.794905 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.794917 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.904969 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.905025 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.905037 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.905055 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:58 crc kubenswrapper[4870]: I0216 17:00:58.905072 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:58Z","lastTransitionTime":"2026-02-16T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.008174 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.008262 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.008286 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.008325 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.008366 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.111648 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.111710 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.111727 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.111754 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.111775 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.214628 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:29:52.099347221 +0000 UTC Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.214840 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.214867 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.214876 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.214893 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.214902 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.222122 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.222141 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.222131 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:00:59 crc kubenswrapper[4870]: E0216 17:00:59.222259 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:00:59 crc kubenswrapper[4870]: E0216 17:00:59.222356 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:00:59 crc kubenswrapper[4870]: E0216 17:00:59.222483 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.317681 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.317739 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.317752 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.317797 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.317811 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.423037 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.423098 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.423115 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.423139 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.423158 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.526087 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.526156 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.526174 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.526217 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.526249 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.642969 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.643020 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.643036 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.643062 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.643077 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.746668 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.746721 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.746731 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.746748 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.746762 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.849680 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.849729 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.849745 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.849789 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.849801 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.952830 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.952901 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.952914 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.952934 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:00:59 crc kubenswrapper[4870]: I0216 17:00:59.952976 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:00:59Z","lastTransitionTime":"2026-02-16T17:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.055908 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.055969 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.055979 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.055995 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.056007 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.159517 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.159582 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.159596 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.159617 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.159631 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.215106 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 13:32:58.780325806 +0000 UTC Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.222473 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:00 crc kubenswrapper[4870]: E0216 17:01:00.222633 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.262244 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.262311 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.262328 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.262350 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.262366 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.365057 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.365108 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.365118 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.365145 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.365156 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.468062 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.468104 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.468114 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.468129 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.468140 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.570987 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.571040 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.571051 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.571079 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.571092 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.674327 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.674370 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.674383 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.674402 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.674415 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.776939 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.776997 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.777009 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.777031 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.777050 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.880148 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.880200 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.880209 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.880229 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.880244 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.982785 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.982836 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.982845 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.982862 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:00 crc kubenswrapper[4870]: I0216 17:01:00.982872 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:00Z","lastTransitionTime":"2026-02-16T17:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.089614 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.089672 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.089684 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.089711 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.089723 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.192532 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.192591 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.192609 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.192633 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.192648 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.215981 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 17:45:00.787414394 +0000 UTC Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.222355 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.222383 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.222478 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:01 crc kubenswrapper[4870]: E0216 17:01:01.222540 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:01 crc kubenswrapper[4870]: E0216 17:01:01.222666 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:01 crc kubenswrapper[4870]: E0216 17:01:01.222773 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.295097 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.295147 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.295164 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.295182 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.295195 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.398277 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.398331 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.398342 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.398359 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.398372 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.501393 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.501444 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.501454 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.501473 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.501483 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.604969 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.605031 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.605042 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.605068 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.605080 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.707830 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.707884 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.707893 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.707912 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.707925 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.811075 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.811140 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.811155 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.811178 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.811193 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.914091 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.914156 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.914173 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.914195 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:01 crc kubenswrapper[4870]: I0216 17:01:01.914214 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:01Z","lastTransitionTime":"2026-02-16T17:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.016963 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.017014 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.017024 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.017043 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.017055 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.119820 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.119892 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.119908 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.119928 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.119958 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.216901 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 05:16:25.504076357 +0000 UTC Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.222262 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:02 crc kubenswrapper[4870]: E0216 17:01:02.222427 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.223157 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.223188 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.223201 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.223217 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.223227 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.326313 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.326373 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.326385 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.326403 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.326416 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.429552 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.429619 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.429631 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.429650 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.429663 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.532970 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.533020 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.533032 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.533051 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.533064 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.635370 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.635433 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.635450 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.635476 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.635493 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.738127 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.738185 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.738203 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.738256 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.738276 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.840920 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.841033 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.841065 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.841171 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.841197 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.944478 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.944552 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.944564 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.944587 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:02 crc kubenswrapper[4870]: I0216 17:01:02.944601 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:02Z","lastTransitionTime":"2026-02-16T17:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.047578 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.047627 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.047637 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.047655 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.047665 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.151264 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.151379 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.151394 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.151413 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.151424 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.217661 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 02:53:22.791444161 +0000 UTC Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.222280 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.222343 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:03 crc kubenswrapper[4870]: E0216 17:01:03.222438 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:03 crc kubenswrapper[4870]: E0216 17:01:03.222643 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.222800 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:03 crc kubenswrapper[4870]: E0216 17:01:03.223073 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.254212 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.254288 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.254301 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.254323 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.254338 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.357297 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.357361 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.357372 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.357392 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.357403 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.436282 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs\") pod \"network-metrics-daemon-zsfxc\" (UID: \"d13b0b83-258a-4545-b358-e08252dbbe87\") " pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:03 crc kubenswrapper[4870]: E0216 17:01:03.436447 4870 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:01:03 crc kubenswrapper[4870]: E0216 17:01:03.436518 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs podName:d13b0b83-258a-4545-b358-e08252dbbe87 nodeName:}" failed. No retries permitted until 2026-02-16 17:01:35.436500027 +0000 UTC m=+99.919964411 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs") pod "network-metrics-daemon-zsfxc" (UID: "d13b0b83-258a-4545-b358-e08252dbbe87") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.460379 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.460457 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.460475 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.460504 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.460523 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.563397 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.563480 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.563495 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.563522 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.563536 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.666560 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.666612 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.666626 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.666649 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.666664 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.770015 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.770058 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.770068 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.770086 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.770098 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.826569 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.826648 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.826668 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.826713 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.826751 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4870]: E0216 17:01:03.841159 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.845987 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.846330 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.846459 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.846659 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.846748 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4870]: E0216 17:01:03.863768 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.868162 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.868203 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.868215 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.868231 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.868242 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4870]: E0216 17:01:03.883366 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.891083 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.891437 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.891768 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.891799 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.891815 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4870]: E0216 17:01:03.907622 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.912757 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.912789 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.912799 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.912815 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.912825 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:03 crc kubenswrapper[4870]: E0216 17:01:03.928557 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:03Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:03 crc kubenswrapper[4870]: E0216 17:01:03.928690 4870 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.930937 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.931003 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.931017 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.931033 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:03 crc kubenswrapper[4870]: I0216 17:01:03.931046 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:03Z","lastTransitionTime":"2026-02-16T17:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.033704 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.033747 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.033762 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.033777 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.033789 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.137551 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.137620 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.137633 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.137656 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.137671 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.218775 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:44:20.615877312 +0000 UTC Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.222548 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:04 crc kubenswrapper[4870]: E0216 17:01:04.222698 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.240135 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.240175 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.240187 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.240204 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.240217 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.343103 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.343229 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.343250 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.343280 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.343304 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.447112 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.447189 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.447205 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.447226 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.447238 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.550156 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.550492 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.550570 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.550639 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.550711 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.653191 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.653526 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.653619 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.653695 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.653778 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.756917 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.757008 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.757023 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.757048 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.757064 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.859602 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.859658 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.859676 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.859699 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.859710 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.962269 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.962319 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.962337 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.962364 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:04 crc kubenswrapper[4870]: I0216 17:01:04.962383 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:04Z","lastTransitionTime":"2026-02-16T17:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.065021 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.065069 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.065080 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.065100 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.065112 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.168245 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.168296 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.168306 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.168327 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.168338 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.219812 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 11:51:10.714289671 +0000 UTC Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.222580 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.222619 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.222672 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:05 crc kubenswrapper[4870]: E0216 17:01:05.222795 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.223007 4870 scope.go:117] "RemoveContainer" containerID="3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f" Feb 16 17:01:05 crc kubenswrapper[4870]: E0216 17:01:05.223072 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:05 crc kubenswrapper[4870]: E0216 17:01:05.223207 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:05 crc kubenswrapper[4870]: E0216 17:01:05.223705 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.271337 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.271376 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.271384 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.271399 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.271409 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.374625 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.374691 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.374703 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.374727 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.374740 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.477261 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.477708 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.477806 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.477912 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.478034 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.581594 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.581648 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.581659 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.581680 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.581690 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.684849 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.684935 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.684959 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.684977 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.684989 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.787638 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.787712 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.787725 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.787749 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.787766 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.890879 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.890931 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.890970 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.890999 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.891019 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.993761 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.993815 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.993833 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.993866 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:05 crc kubenswrapper[4870]: I0216 17:01:05.993892 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:05Z","lastTransitionTime":"2026-02-16T17:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.097249 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.097337 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.097366 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.097404 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.097428 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.200684 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.200725 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.200737 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.200753 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.200765 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.221177 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:20:58.477874714 +0000 UTC Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.222727 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:06 crc kubenswrapper[4870]: E0216 17:01:06.222896 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.244262 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.257069 4870 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb887938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.269999 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc 
kubenswrapper[4870]: I0216 17:01:06.282155 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1daf779e-125b-4b65-a0a5-1d17de09d7f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a830b7770df5d6eb09cd233bf335c59c23fcc311afc7b2fb2df511ee4f7a1f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22759612b983e0c11f0aef2e271a252d00fa8bcfafabf5176d17d29ba811f485\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7349c946be2d324b8b6393304de50fff9f89a1143c1d4249e2c8bfacb1df1251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.300694 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.303072 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.303162 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.303176 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 
17:01:06.303195 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.303206 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.318606 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.333299 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.350434 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.371393 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea
576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.385872 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:01:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.397687 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.405590 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.405641 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.405657 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.405678 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.405691 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.413312 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:06Z 
is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.424573 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655cd750405c9ca8c279f2bdb55e6d700aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.440788 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:
13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature 
gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.456479 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.470469 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.482623 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.505029 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:49Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 17:00:49.152089 6562 services_controller.go:445] Built service openshift-kube-storage-version-migrator-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0216 17:00:49.152100 6562 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z]\\\\nI0216 17:00:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bcc
b5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:06Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.508937 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.509012 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.509025 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.509043 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.509059 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.612134 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.612199 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.612214 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.612264 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.612276 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.714742 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.714787 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.714800 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.714826 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.714843 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.818551 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.818602 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.818617 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.818643 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.818659 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.922751 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.922812 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.922827 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.922846 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:06 crc kubenswrapper[4870]: I0216 17:01:06.922857 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:06Z","lastTransitionTime":"2026-02-16T17:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.026889 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.027010 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.027042 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.027081 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.027105 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.129831 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.129889 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.129905 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.129925 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.129941 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.222287 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 01:37:13.444043856 +0000 UTC Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.222385 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.222440 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:07 crc kubenswrapper[4870]: E0216 17:01:07.222627 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.222723 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:07 crc kubenswrapper[4870]: E0216 17:01:07.222885 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:07 crc kubenswrapper[4870]: E0216 17:01:07.223219 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.233804 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.233878 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.233896 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.233924 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.234242 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.337528 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.337603 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.337622 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.337652 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.337672 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.440752 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.440843 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.440866 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.440904 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.440927 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.544487 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.544535 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.544548 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.544575 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.544628 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.647304 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.647339 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.647348 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.647366 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.647376 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.750364 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.750431 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.750446 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.750467 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.750484 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.852797 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.853118 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.853185 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.853256 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.853318 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.955723 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.955840 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.955881 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.955935 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:07 crc kubenswrapper[4870]: I0216 17:01:07.956010 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:07Z","lastTransitionTime":"2026-02-16T17:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.059031 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.059082 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.059092 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.059111 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.059123 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.162337 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.162728 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.162869 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.162977 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.163142 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.222428 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:08 crc kubenswrapper[4870]: E0216 17:01:08.222658 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.222731 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 09:40:14.611126095 +0000 UTC Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.237538 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.266801 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.266868 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.266886 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.266907 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.266920 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.370292 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.370360 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.370382 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.370411 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.370427 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.473081 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.473147 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.473158 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.473184 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.473199 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.575810 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.575849 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.575861 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.575877 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.575887 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.678285 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.678325 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.678337 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.678353 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.678362 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.702684 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjq54_52f144f1-d0b6-4871-a439-6aaf51304c4b/kube-multus/0.log" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.702745 4870 generic.go:334] "Generic (PLEG): container finished" podID="52f144f1-d0b6-4871-a439-6aaf51304c4b" containerID="2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e" exitCode=1 Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.702844 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jjq54" event={"ID":"52f144f1-d0b6-4871-a439-6aaf51304c4b","Type":"ContainerDied","Data":"2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e"} Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.703293 4870 scope.go:117] "RemoveContainer" containerID="2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.724317 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"535ee11c-6a2f-4974-acfc-59b6463aa0f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f3cdf90be2ee9e1b5b1eff2e81b1f42985106dfcba9a9aab0cc27bfa908a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89843858f314702b13c9436980c8c50c088761d78060e9e7b79182ef78cede81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89843858f314702b13c9436980c8c50c088761d78060e9e7b79182ef78cede81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.740536 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1daf779e-125b-4b65-a0a5-1d17de09d7f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a830b7770df5d6eb09cd233bf335c59c23fcc311afc7b2fb2df511ee4f7a1f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22759612b983e0c11f0aef2e271a252d00fa8bcfafabf5176d17d29ba811f485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7349c946be2d324b8b6393304de50fff9f89a1143c1d4249e2c8bfacb1df1251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.756550 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.771056 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.781220 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.781275 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.781291 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.781320 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.781342 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.792432 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.815808 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.843380 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea
576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.860251 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:01:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.874642 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.883657 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.883711 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.883727 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.883753 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.883769 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.900041 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:01:07Z\\\",\\\"message\\\":\\\"2026-02-16T17:00:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51f38232-ba70-4428-a541-1e41a42a0b6b\\\\n2026-02-16T17:00:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51f38232-ba70-4428-a541-1e41a42a0b6b to /host/opt/cni/bin/\\\\n2026-02-16T17:00:22Z [verbose] multus-daemon started\\\\n2026-02-16T17:00:22Z [verbose] Readiness Indicator file check\\\\n2026-02-16T17:01:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.921980 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655cd750405c9ca8c279f2bdb55e6d700aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.942092 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 
configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\
\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.960579 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.975589 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.986986 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.987059 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.987075 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.987099 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.987117 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:08Z","lastTransitionTime":"2026-02-16T17:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:08 crc kubenswrapper[4870]: I0216 17:01:08.990227 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:08Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.013975 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:49Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 17:00:49.152089 6562 services_controller.go:445] Built service openshift-kube-storage-version-migrator-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0216 17:00:49.152100 6562 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z]\\\\nI0216 17:00:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bcc
b5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.032577 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.050563 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.065934 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc 
kubenswrapper[4870]: I0216 17:01:09.090160 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.090224 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.090235 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.090255 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.090267 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.192599 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.192757 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.192849 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.192918 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.193006 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.221922 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.221923 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:09 crc kubenswrapper[4870]: E0216 17:01:09.222089 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:09 crc kubenswrapper[4870]: E0216 17:01:09.222339 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.222660 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:09 crc kubenswrapper[4870]: E0216 17:01:09.223407 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.223067 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 13:58:18.309171045 +0000 UTC Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.296262 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.296338 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.296357 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.296391 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.296410 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.399497 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.399586 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.399616 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.399648 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.399672 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.503066 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.503392 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.503480 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.503605 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.503702 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.606392 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.606452 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.606462 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.606481 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.606492 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.708020 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjq54_52f144f1-d0b6-4871-a439-6aaf51304c4b/kube-multus/0.log" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.708088 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jjq54" event={"ID":"52f144f1-d0b6-4871-a439-6aaf51304c4b","Type":"ContainerStarted","Data":"f967b353df1fd88fc135755f8ab9d8f0b1dedc8b005d09f4191f821eccb01bdb"} Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.708668 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.708704 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.708718 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.708734 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.708747 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.728465 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.741833 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.756306 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc 
kubenswrapper[4870]: I0216 17:01:09.770881 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c
59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.783406 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"535ee11c-6a2f-4974-acfc-59b6463aa0f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f3cdf90be2ee9e1b5b1eff2e81b1f42985106dfcba9a9aab0cc27bfa908a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89843858f314702b13c9436980c8c50c088761d78060e9e7b79182ef78cede81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89843858f314702b13c9436980c8c50c088761d78060e9e7b79182ef78cede81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.796733 4870 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1daf779e-125b-4b65-a0a5-1d17de09d7f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a830b7770df5d6eb09cd233bf335c59c23fcc311afc7b2fb2df511ee4f7a1f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22759612b983e0c11f0aef2e271a252d00fa8bcfafabf5176d17d29ba811f485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7349c946be2d324b8b6393304de50fff9f89a1143c1d4249e2c8bfacb1df1251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.811314 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.811375 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.811386 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.811426 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.811444 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.816244 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.833564 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.850731 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.874136 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.888129 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.902205 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.914630 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.914714 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.914729 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.914777 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.914792 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:09Z","lastTransitionTime":"2026-02-16T17:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.919043 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f967b353df1fd88fc135755f8ab9d8f0b1dedc8b005d09f4191f821eccb01bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:01:07Z\\\",\\\"message\\\":\\\"2026-02-16T17:00:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51f38232-ba70-4428-a541-1e41a42a0b6b\\\\n2026-02-16T17:00:21+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51f38232-ba70-4428-a541-1e41a42a0b6b to /host/opt/cni/bin/\\\\n2026-02-16T17:00:22Z [verbose] multus-daemon started\\\\n2026-02-16T17:00:22Z [verbose] Readiness Indicator file check\\\\n2026-02-16T17:01:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.933040 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655cd750405c9ca8c279f2bdb55e6d70
0aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.949601 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\"
,\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.965008 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.981557 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:09 crc kubenswrapper[4870]: I0216 17:01:09.992356 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:09Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.017488 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.017558 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.017578 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.017610 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.017632 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.022535 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:49Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 17:00:49.152089 6562 services_controller.go:445] Built service openshift-kube-storage-version-migrator-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0216 17:00:49.152100 6562 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z]\\\\nI0216 17:00:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bcc
b5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:10Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.120604 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.120704 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.120721 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.120746 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.120795 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.222145 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 17:01:10 crc kubenswrapper[4870]: E0216 17:01:10.222304 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.223989 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 09:01:54.106041509 +0000 UTC
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.224091 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.224121 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.224135 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.224157 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.224169 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.327025 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.327078 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.327093 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.327115 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.327132 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.430336 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.430408 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.430429 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.430456 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.430474 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.534416 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.534486 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.534507 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.534537 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.534556 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.637641 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.637701 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.637718 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.637739 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.637756 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.739564 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.739612 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.739621 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.739637 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.739647 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.843466 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.843527 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.843544 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.843573 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.843592 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.947189 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.947278 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.947302 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.947336 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:10 crc kubenswrapper[4870]: I0216 17:01:10.947363 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:10Z","lastTransitionTime":"2026-02-16T17:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.051385 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.051491 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.051526 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.051564 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.051588 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.155630 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.155709 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.155743 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.155800 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.155827 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.223117 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.223154 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 17:01:11 crc kubenswrapper[4870]: E0216 17:01:11.223295 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.223171 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 17:01:11 crc kubenswrapper[4870]: E0216 17:01:11.223535 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 17:01:11 crc kubenswrapper[4870]: E0216 17:01:11.223818 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.224200 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:37:45.466522017 +0000 UTC
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.258709 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.258784 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.258805 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.258834 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.258850 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.361717 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.361782 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.361792 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.361809 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.361834 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.465110 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.465177 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.465189 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.465314 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.465356 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.568170 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.568204 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.568242 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.568257 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.568266 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.671164 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.671211 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.671223 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.671241 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.671257 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.774258 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.774324 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.774348 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.774383 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.774407 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.878103 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.878170 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.878188 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.878215 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.878235 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.981922 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.981988 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.981999 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.982020 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:11 crc kubenswrapper[4870]: I0216 17:01:11.982032 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:11Z","lastTransitionTime":"2026-02-16T17:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.084936 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.085056 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.085075 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.085113 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.085133 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.188459 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.188509 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.188521 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.188546 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.188562 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.222529 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 17:01:12 crc kubenswrapper[4870]: E0216 17:01:12.222755 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.224546 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:15:19.857497001 +0000 UTC
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.293246 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.293345 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.293365 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.293398 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.293418 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.396465 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.396620 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.396640 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.396669 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.396686 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.499869 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.499943 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.500002 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.500031 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.500054 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.604318 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.604456 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.604490 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.604533 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.604557 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.708081 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.708237 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.708267 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.708390 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.708465 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.812159 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.812227 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.812254 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.812273 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.812284 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.916805 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.916870 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.916882 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.916905 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:12 crc kubenswrapper[4870]: I0216 17:01:12.916919 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:12Z","lastTransitionTime":"2026-02-16T17:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.020740 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.020864 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.020934 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.021053 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.021138 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:13Z","lastTransitionTime":"2026-02-16T17:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.125283 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.125365 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.125384 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.125414 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.125435 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:13Z","lastTransitionTime":"2026-02-16T17:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.222647 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.222908 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 17:01:13 crc kubenswrapper[4870]: E0216 17:01:13.223039 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.223113 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:13 crc kubenswrapper[4870]: E0216 17:01:13.223326 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:13 crc kubenswrapper[4870]: E0216 17:01:13.223444 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.224846 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 14:17:45.531514179 +0000 UTC Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.228643 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.228691 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.228707 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.228734 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.228745 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:13Z","lastTransitionTime":"2026-02-16T17:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.331732 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.332124 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.332150 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.332194 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.332234 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:13Z","lastTransitionTime":"2026-02-16T17:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.436192 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.436267 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.436285 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.436311 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.436330 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:13Z","lastTransitionTime":"2026-02-16T17:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.539400 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.539458 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.539481 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.539506 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.539520 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:13Z","lastTransitionTime":"2026-02-16T17:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.643493 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.643570 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.643582 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.643604 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.643623 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:13Z","lastTransitionTime":"2026-02-16T17:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.747305 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.747352 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.747364 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.747390 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.747402 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:13Z","lastTransitionTime":"2026-02-16T17:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.851305 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.851464 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.851492 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.851533 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.851559 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:13Z","lastTransitionTime":"2026-02-16T17:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.970496 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.970557 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.970570 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.970593 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:13 crc kubenswrapper[4870]: I0216 17:01:13.970607 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:13Z","lastTransitionTime":"2026-02-16T17:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.073171 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.073225 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.073239 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.073258 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.073273 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:14Z","lastTransitionTime":"2026-02-16T17:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.176355 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.176404 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.176422 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.176443 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.176456 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:14Z","lastTransitionTime":"2026-02-16T17:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.222552 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:14 crc kubenswrapper[4870]: E0216 17:01:14.222902 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.225756 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 08:59:21.530639159 +0000 UTC Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.279662 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.279720 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.279732 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.279751 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.279765 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:14Z","lastTransitionTime":"2026-02-16T17:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.281177 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.281230 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.281245 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.281261 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.281275 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:14Z","lastTransitionTime":"2026-02-16T17:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:14 crc kubenswrapper[4870]: E0216 17:01:14.299850 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.306222 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.306321 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.306342 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.306387 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.306408 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:14Z","lastTransitionTime":"2026-02-16T17:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:14 crc kubenswrapper[4870]: E0216 17:01:14.328892 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.338332 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.338493 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.338515 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.338549 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.338570 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:14Z","lastTransitionTime":"2026-02-16T17:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:14 crc kubenswrapper[4870]: E0216 17:01:14.358716 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.364510 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.364553 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.364565 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.364588 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.364604 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:14Z","lastTransitionTime":"2026-02-16T17:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:14 crc kubenswrapper[4870]: E0216 17:01:14.385412 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.389871 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.390014 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.390039 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.390067 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.390087 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:14Z","lastTransitionTime":"2026-02-16T17:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:14 crc kubenswrapper[4870]: E0216 17:01:14.413078 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:14Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:14 crc kubenswrapper[4870]: E0216 17:01:14.413467 4870 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.416484 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.416593 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.416662 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.416696 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.416773 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:14Z","lastTransitionTime":"2026-02-16T17:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.520756 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.520807 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.520821 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.520843 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.520857 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:14Z","lastTransitionTime":"2026-02-16T17:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.623322 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.623380 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.623390 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.623411 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.623422 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:14Z","lastTransitionTime":"2026-02-16T17:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.726594 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.726668 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.726686 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.726715 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.726736 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:14Z","lastTransitionTime":"2026-02-16T17:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.830361 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.830678 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.830762 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.830916 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.831080 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:14Z","lastTransitionTime":"2026-02-16T17:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.934019 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.934062 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.934071 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.934090 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:14 crc kubenswrapper[4870]: I0216 17:01:14.934101 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:14Z","lastTransitionTime":"2026-02-16T17:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.036835 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.037289 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.037452 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.037642 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.037818 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:15Z","lastTransitionTime":"2026-02-16T17:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.141446 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.141495 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.141506 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.141524 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.141534 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:15Z","lastTransitionTime":"2026-02-16T17:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.223228 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.224124 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:15 crc kubenswrapper[4870]: E0216 17:01:15.224264 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.224345 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:15 crc kubenswrapper[4870]: E0216 17:01:15.224430 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:15 crc kubenswrapper[4870]: E0216 17:01:15.224717 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.226145 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 21:13:56.713925061 +0000 UTC Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.245017 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.245079 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.245097 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.245134 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.245160 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:15Z","lastTransitionTime":"2026-02-16T17:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.348691 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.348750 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.348767 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.348789 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.348803 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:15Z","lastTransitionTime":"2026-02-16T17:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.452601 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.452678 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.452698 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.452735 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.452757 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:15Z","lastTransitionTime":"2026-02-16T17:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.556667 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.556722 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.556732 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.556752 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.556763 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:15Z","lastTransitionTime":"2026-02-16T17:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.659762 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.659835 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.659852 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.659880 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.659899 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:15Z","lastTransitionTime":"2026-02-16T17:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.763064 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.763141 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.763165 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.763187 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.763202 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:15Z","lastTransitionTime":"2026-02-16T17:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.866474 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.866544 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.866560 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.866585 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.866599 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:15Z","lastTransitionTime":"2026-02-16T17:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.970241 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.970336 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.970486 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.970524 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:15 crc kubenswrapper[4870]: I0216 17:01:15.970548 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:15Z","lastTransitionTime":"2026-02-16T17:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.073525 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.073616 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.073642 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.073674 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.073693 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:16Z","lastTransitionTime":"2026-02-16T17:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.177890 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.178012 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.178040 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.178071 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.178096 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:16Z","lastTransitionTime":"2026-02-16T17:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.222268 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:16 crc kubenswrapper[4870]: E0216 17:01:16.222566 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.226407 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 08:24:50.763348533 +0000 UTC Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.242069 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"535ee11c-6a2f-4974-acfc-59b6463aa0f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f3cdf90be2ee9e1b5b1eff2e81b1f42985106dfcba9a9aab0cc27bfa908a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89843858f314702b13c9436980c8c50c088761d78060e9e7b79182ef78cede81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89843858f314702b13c9436980c8c50c088761d78060e9e7b79182ef78cede81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.266045 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1daf779e-125b-4b65-a0a5-1d17de09d7f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a830b7770df5d6eb09cd233bf335c59c23fcc311afc7b2fb2df511ee4f7a1f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22759612b983e0c11f0aef2e271a252d00fa8bcfafabf5176d17d29ba811f485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7349c946be2d324b8b6393304de50fff9f89a1143c1d4249e2c8bfacb1df1251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.282449 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.282505 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.282522 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.282548 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.282563 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:16Z","lastTransitionTime":"2026-02-16T17:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.284794 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.302869 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.319840 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.343311 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.366242 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea
576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.381252 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.386441 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.386509 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.386529 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.386556 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.386580 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:16Z","lastTransitionTime":"2026-02-16T17:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.396268 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.412070 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f967b353df1fd88fc135755f8ab9d8f0b1dedc8b005
d09f4191f821eccb01bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:01:07Z\\\",\\\"message\\\":\\\"2026-02-16T17:00:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51f38232-ba70-4428-a541-1e41a42a0b6b\\\\n2026-02-16T17:00:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51f38232-ba70-4428-a541-1e41a42a0b6b to /host/opt/cni/bin/\\\\n2026-02-16T17:00:22Z [verbose] multus-daemon started\\\\n2026-02-16T17:00:22Z [verbose] Readiness Indicator file check\\\\n2026-02-16T17:01:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.424668 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655cd750405c9ca8c279f2bdb55e6d700aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.438608 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o
://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 
17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.454029 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.467638 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.483654 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.491071 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.491147 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.491160 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.491181 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.491194 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:16Z","lastTransitionTime":"2026-02-16T17:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.506787 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:49Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 17:00:49.152089 6562 services_controller.go:445] Built service openshift-kube-storage-version-migrator-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0216 17:00:49.152100 6562 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z]\\\\nI0216 17:00:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bcc
b5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.521760 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.534831 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.548124 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:16Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:16 crc 
kubenswrapper[4870]: I0216 17:01:16.593580 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.593646 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.593663 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.593684 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.593698 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:16Z","lastTransitionTime":"2026-02-16T17:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.696635 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.696673 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.696684 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.696699 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.696708 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:16Z","lastTransitionTime":"2026-02-16T17:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.799681 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.799736 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.799747 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.799767 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.799779 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:16Z","lastTransitionTime":"2026-02-16T17:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.903213 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.903375 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.903406 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.903442 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:16 crc kubenswrapper[4870]: I0216 17:01:16.903466 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:16Z","lastTransitionTime":"2026-02-16T17:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.006523 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.006594 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.006618 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.006651 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.006675 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:17Z","lastTransitionTime":"2026-02-16T17:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.109574 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.109628 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.109646 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.109671 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.109717 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:17Z","lastTransitionTime":"2026-02-16T17:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.212190 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.212240 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.212255 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.212275 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.212289 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:17Z","lastTransitionTime":"2026-02-16T17:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.222227 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.222257 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.222297 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:17 crc kubenswrapper[4870]: E0216 17:01:17.222418 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:17 crc kubenswrapper[4870]: E0216 17:01:17.222518 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:17 crc kubenswrapper[4870]: E0216 17:01:17.222599 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.227289 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 11:52:51.555423144 +0000 UTC Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.316163 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.316242 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.316262 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.316293 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.316313 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:17Z","lastTransitionTime":"2026-02-16T17:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.419766 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.419822 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.419832 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.419850 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.419863 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:17Z","lastTransitionTime":"2026-02-16T17:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.523073 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.523125 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.523168 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.523195 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.523211 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:17Z","lastTransitionTime":"2026-02-16T17:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.625438 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.625491 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.625503 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.625516 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.625526 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:17Z","lastTransitionTime":"2026-02-16T17:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.729013 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.729081 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.729099 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.729124 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.729148 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:17Z","lastTransitionTime":"2026-02-16T17:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.832264 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.832708 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.832749 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.832779 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.832798 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:17Z","lastTransitionTime":"2026-02-16T17:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.936700 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.936749 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.936762 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.936782 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:17 crc kubenswrapper[4870]: I0216 17:01:17.936796 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:17Z","lastTransitionTime":"2026-02-16T17:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.041046 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.041139 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.041206 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.041237 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.041262 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:18Z","lastTransitionTime":"2026-02-16T17:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.144772 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.144843 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.144860 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.144891 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.144909 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:18Z","lastTransitionTime":"2026-02-16T17:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.223118 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:18 crc kubenswrapper[4870]: E0216 17:01:18.223355 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.229482 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 19:16:05.19341502 +0000 UTC Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.247601 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.247672 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.247689 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.247718 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.247735 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:18Z","lastTransitionTime":"2026-02-16T17:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.350740 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.350788 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.350801 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.350820 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.350832 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:18Z","lastTransitionTime":"2026-02-16T17:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.453864 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.453936 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.454003 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.454031 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.454049 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:18Z","lastTransitionTime":"2026-02-16T17:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.556749 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.556816 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.556829 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.556852 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.556883 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:18Z","lastTransitionTime":"2026-02-16T17:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.660622 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.660713 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.660730 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.660759 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.660778 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:18Z","lastTransitionTime":"2026-02-16T17:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.764173 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.764250 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.764270 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.764301 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.764320 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:18Z","lastTransitionTime":"2026-02-16T17:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.868470 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.868564 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.868591 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.868628 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.868652 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:18Z","lastTransitionTime":"2026-02-16T17:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.972593 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.972672 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.972682 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.972700 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:18 crc kubenswrapper[4870]: I0216 17:01:18.972710 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:18Z","lastTransitionTime":"2026-02-16T17:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.075779 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.075843 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.075859 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.075880 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.075892 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:19Z","lastTransitionTime":"2026-02-16T17:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.179258 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.179342 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.179366 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.179401 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.179426 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:19Z","lastTransitionTime":"2026-02-16T17:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.221863 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.221930 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.222125 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:19 crc kubenswrapper[4870]: E0216 17:01:19.222217 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:19 crc kubenswrapper[4870]: E0216 17:01:19.222323 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:19 crc kubenswrapper[4870]: E0216 17:01:19.222462 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.230156 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:16:20.048783532 +0000 UTC Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.282762 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.282832 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.282855 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.282891 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.282914 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:19Z","lastTransitionTime":"2026-02-16T17:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.387219 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.387339 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.387381 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.387407 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.387419 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:19Z","lastTransitionTime":"2026-02-16T17:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.490975 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.491041 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.491054 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.491075 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.491089 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:19Z","lastTransitionTime":"2026-02-16T17:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.593362 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.593409 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.593425 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.593449 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.593464 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:19Z","lastTransitionTime":"2026-02-16T17:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.696860 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.696912 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.696931 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.696993 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.697012 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:19Z","lastTransitionTime":"2026-02-16T17:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.800122 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.800240 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.800260 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.800286 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.800305 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:19Z","lastTransitionTime":"2026-02-16T17:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.903697 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.903770 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.903786 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.903807 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:19 crc kubenswrapper[4870]: I0216 17:01:19.903820 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:19Z","lastTransitionTime":"2026-02-16T17:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.007688 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.007753 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.007770 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.007798 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.007815 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:20Z","lastTransitionTime":"2026-02-16T17:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.110805 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.110872 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.110885 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.110905 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.110918 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:20Z","lastTransitionTime":"2026-02-16T17:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.214457 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.214505 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.214514 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.214538 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.214552 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:20Z","lastTransitionTime":"2026-02-16T17:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.222470 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:20 crc kubenswrapper[4870]: E0216 17:01:20.222922 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.223159 4870 scope.go:117] "RemoveContainer" containerID="3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.230245 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 08:26:04.772290656 +0000 UTC Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.236667 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.236844 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.236886 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.236917 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:20 crc kubenswrapper[4870]: E0216 17:01:20.236970 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.236912521 +0000 UTC m=+148.720376905 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:01:20 crc kubenswrapper[4870]: E0216 17:01:20.237069 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:01:20 crc kubenswrapper[4870]: E0216 17:01:20.237096 4870 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:01:20 crc kubenswrapper[4870]: E0216 17:01:20.237110 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:01:20 crc kubenswrapper[4870]: E0216 17:01:20.237132 4870 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.237150 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:20 crc kubenswrapper[4870]: E0216 17:01:20.237166 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.237149828 +0000 UTC m=+148.720614222 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:01:20 crc kubenswrapper[4870]: E0216 17:01:20.237273 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.237256051 +0000 UTC m=+148.720720625 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:01:20 crc kubenswrapper[4870]: E0216 17:01:20.237221 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:01:20 crc kubenswrapper[4870]: E0216 17:01:20.237336 4870 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:01:20 crc kubenswrapper[4870]: E0216 17:01:20.237354 4870 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:01:20 crc kubenswrapper[4870]: E0216 17:01:20.237362 4870 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:01:20 crc kubenswrapper[4870]: E0216 17:01:20.237444 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.237411865 +0000 UTC m=+148.720876459 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:01:20 crc kubenswrapper[4870]: E0216 17:01:20.237469 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.237455476 +0000 UTC m=+148.720920100 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.317185 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.317236 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.317252 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.317274 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.317294 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:20Z","lastTransitionTime":"2026-02-16T17:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.419887 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.420271 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.420293 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.420319 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.420337 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:20Z","lastTransitionTime":"2026-02-16T17:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.522712 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.522753 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.522772 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.522795 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.522810 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:20Z","lastTransitionTime":"2026-02-16T17:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.626781 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.626832 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.626843 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.626862 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.626875 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:20Z","lastTransitionTime":"2026-02-16T17:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.730378 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.730445 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.730463 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.730489 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.730504 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:20Z","lastTransitionTime":"2026-02-16T17:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.756730 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovnkube-controller/2.log" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.759879 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerStarted","Data":"5b5256536b28a974ba56223677edeff840dc94d3e8dbbbe1c03b09c1e82f6ffb"} Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.760448 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.774697 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55
b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.793481 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.815726 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f967b353df1fd88fc135755f8ab9d8f0b1dedc8b005d09f4191f821eccb01bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:01:07Z\\\",\\\"message\\\":\\\"2026-02-16T17:00:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51f38232-ba70-4428-a541-1e41a42a0b6b\\\\n2026-02-16T17:00:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51f38232-ba70-4428-a541-1e41a42a0b6b to /host/opt/cni/bin/\\\\n2026-02-16T17:00:22Z [verbose] multus-daemon started\\\\n2026-02-16T17:00:22Z [verbose] Readiness Indicator file check\\\\n2026-02-16T17:01:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"ho
st-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.828708 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655cd750405c9ca8c279f2bdb55e6d70
0aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.832758 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.832809 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.832819 4870 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.832841 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.832854 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:20Z","lastTransitionTime":"2026-02-16T17:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.852316 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5
be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.867459 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.881748 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.893746 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.914700 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5256536b28a974ba56223677edeff840dc94d3e8dbbbe1c03b09c1e82f6ffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:49Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 17:00:49.152089 6562 services_controller.go:445] Built service openshift-kube-storage-version-migrator-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0216 17:00:49.152100 6562 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z]\\\\nI0216 
17:00:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.935089 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.935137 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.935150 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.935179 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.935191 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:20Z","lastTransitionTime":"2026-02-16T17:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.937324 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b
d12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 
17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.950250 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.961043 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:20 crc 
kubenswrapper[4870]: I0216 17:01:20.972741 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:20 crc kubenswrapper[4870]: I0216 17:01:20.983167 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1daf779e-125b-4b65-a0a5-1d17de09d7f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a830b7770df5d6eb09cd233bf335c59c23fcc311afc7b2fb2df511ee4f7a1f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22759612b983e0c11f0aef2e271a252d00fa8bcfafabf5176d17d29ba811f485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7349c946be2d324b8b6393304de50fff9f89a1143c1d4249e2c8bfacb1df1251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:20Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.011067 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.026923 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.037912 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.037993 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.038011 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.038033 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.038048 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:21Z","lastTransitionTime":"2026-02-16T17:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.041225 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.055652 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.067698 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"535ee11c-6a2f-4974-acfc-59b6463aa0f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f3cdf90be2ee9e1b5b1eff2e81b1f42985106dfcba9a9aab0cc27bfa908a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89843858f314702b13c9436980c8c50c088761d78060e9e7b79182ef78cede81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89843858f314702b13c9436980c8c50c088761d78060e9e7b79182ef78cede81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.140919 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.140983 4870 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.140995 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.141016 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.141032 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:21Z","lastTransitionTime":"2026-02-16T17:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.222145 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.222194 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.222168 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:21 crc kubenswrapper[4870]: E0216 17:01:21.222342 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:21 crc kubenswrapper[4870]: E0216 17:01:21.222469 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:21 crc kubenswrapper[4870]: E0216 17:01:21.222547 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.231126 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:20:03.313927098 +0000 UTC Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.244144 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.244196 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.244209 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.244227 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:21 crc 
kubenswrapper[4870]: I0216 17:01:21.244238 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:21Z","lastTransitionTime":"2026-02-16T17:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.346381 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.346499 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.346521 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.346545 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.346563 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:21Z","lastTransitionTime":"2026-02-16T17:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.449040 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.449084 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.449096 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.449114 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.449128 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:21Z","lastTransitionTime":"2026-02-16T17:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.553119 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.553188 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.553211 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.553244 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.553268 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:21Z","lastTransitionTime":"2026-02-16T17:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.656579 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.656658 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.656679 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.656706 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.656725 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:21Z","lastTransitionTime":"2026-02-16T17:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.759290 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.759330 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.759338 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.759353 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.759363 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:21Z","lastTransitionTime":"2026-02-16T17:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.764818 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovnkube-controller/3.log" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.765674 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovnkube-controller/2.log" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.769269 4870 generic.go:334] "Generic (PLEG): container finished" podID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerID="5b5256536b28a974ba56223677edeff840dc94d3e8dbbbe1c03b09c1e82f6ffb" exitCode=1 Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.769314 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerDied","Data":"5b5256536b28a974ba56223677edeff840dc94d3e8dbbbe1c03b09c1e82f6ffb"} Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.769372 4870 scope.go:117] "RemoveContainer" containerID="3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.769905 4870 scope.go:117] "RemoveContainer" containerID="5b5256536b28a974ba56223677edeff840dc94d3e8dbbbe1c03b09c1e82f6ffb" Feb 16 17:01:21 crc kubenswrapper[4870]: E0216 17:01:21.770089 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.792275 4870 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763
c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1d
fd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.805685 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.820966 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.838486 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f967b353df1fd88fc135755f8ab9d8f0b1dedc8b005d09f4191f821eccb01bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:01:07Z\\\",\\\"message\\\":\\\"2026-02-16T17:00:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51f38232-ba70-4428-a541-1e41a42a0b6b\\\\n2026-02-16T17:00:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51f38232-ba70-4428-a541-1e41a42a0b6b to /host/opt/cni/bin/\\\\n2026-02-16T17:00:22Z [verbose] multus-daemon started\\\\n2026-02-16T17:00:22Z [verbose] Readiness Indicator file check\\\\n2026-02-16T17:01:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"ho
st-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.856798 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655cd750405c9ca8c279f2bdb55e6d70
0aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.863980 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.864490 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.864883 4870 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.864969 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.864984 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:21Z","lastTransitionTime":"2026-02-16T17:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.877261 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.891295 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.904443 4870 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.915062 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.932443 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5256536b28a974ba56223677edeff840dc94d3e8dbbbe1c03b09c1e82f6ffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3eb6da58060464eccb49d1b4a93055b64469a1ae1c21780bb781f657fbaf686f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:00:49Z\\\",\\\"message\\\":\\\"mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports 
Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 17:00:49.152089 6562 services_controller.go:445] Built service openshift-kube-storage-version-migrator-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0216 17:00:49.152100 6562 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:00:49Z is after 2025-08-24T17:21:41Z]\\\\nI0216 17:00:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5256536b28a974ba56223677edeff840dc94d3e8dbbbe1c03b09c1e82f6ffb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:01:21Z\\\",\\\"message\\\":\\\"github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 17:01:21.201178 7033 handler.go:190] Sending *v1.NetworkPolicy event handler 4 
for removal\\\\nI0216 17:01:21.201249 7033 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 17:01:21.201292 7033 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 17:01:21.201335 7033 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 17:01:21.201340 7033 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:01:21.201397 7033 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 17:01:21.201398 7033 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 17:01:21.201437 7033 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 17:01:21.201366 7033 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:01:21.201462 7033 factory.go:656] Stopping watch factory\\\\nI0216 17:01:21.201475 7033 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:01:21.201542 7033 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:01:21.201382 7033 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 17:01:21.201500 7033 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d185
04cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.943854 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.956114 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.968062 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc 
kubenswrapper[4870]: I0216 17:01:21.968523 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.968572 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.968581 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.968600 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.968610 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:21Z","lastTransitionTime":"2026-02-16T17:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.978620 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"535ee11c-6a2f-4974-acfc-59b6463aa0f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f3cdf90be2ee9e1b5b1eff2e81b1f42985106dfcba9a9aab0cc27bfa908a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89843858f314702b13c9436980c8c50c088761d78060e9e7b79182ef78cede81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89843858f314702b13c9436980c8c50c088761d78060e9e7b79182ef78cede81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:21 crc kubenswrapper[4870]: I0216 17:01:21.988746 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1daf779e-125b-4b65-a0a5-1d17de09d7f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a830b7770df5d6eb09cd233bf335c59c23fcc311afc7b2fb2df511ee4f7a1f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22759612b983e0c11f0aef2e271a252d00fa8bcfafabf5176d17d29ba811f485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7349c946be2d324b8b6393304de50fff9f89a1143c1d4249e2c8bfacb1df1251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.000467 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:21Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.012926 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.025455 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.038885 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.071552 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.071608 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.071623 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.071647 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.071665 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:22Z","lastTransitionTime":"2026-02-16T17:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.174419 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.174495 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.174513 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.174547 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.174568 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:22Z","lastTransitionTime":"2026-02-16T17:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.222130 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:22 crc kubenswrapper[4870]: E0216 17:01:22.222549 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.231846 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:09:16.598564259 +0000 UTC Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.277938 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.278045 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.278067 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.278106 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.278131 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:22Z","lastTransitionTime":"2026-02-16T17:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.381196 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.381291 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.381315 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.381351 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.381374 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:22Z","lastTransitionTime":"2026-02-16T17:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.484864 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.484936 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.485004 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.485040 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.485064 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:22Z","lastTransitionTime":"2026-02-16T17:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.587623 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.587657 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.587666 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.587680 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.587691 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:22Z","lastTransitionTime":"2026-02-16T17:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.690280 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.690327 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.690338 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.690359 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.690372 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:22Z","lastTransitionTime":"2026-02-16T17:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.774184 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovnkube-controller/3.log" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.777528 4870 scope.go:117] "RemoveContainer" containerID="5b5256536b28a974ba56223677edeff840dc94d3e8dbbbe1c03b09c1e82f6ffb" Feb 16 17:01:22 crc kubenswrapper[4870]: E0216 17:01:22.777786 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.793927 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1daf779e-125b-4b65-a0a5-1d17de09d7f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a830b7770df5d6eb09cd233bf335c59c23fcc311afc7b2fb2df511ee4f7a1f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22759612b983e0c11f0aef2e271a252d00fa8bcfafabf5176d17d29ba811f485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7349c946be2d324b8b6393304de50fff9f89a1143c1d4249e2c8bfacb1df1251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5cd45a46ea1111cb66e4e31da8f8b3905d1d203b93f16247aab378df4eec5481\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.794238 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.794275 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.794285 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.794304 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.794315 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:22Z","lastTransitionTime":"2026-02-16T17:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.810200 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.825994 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.843438 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.869681 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-995kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41366745-32be-4762-84c3-25c4b4e1732b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af89d3a5b641064148db513a2302417ce907af2d8393e72c3ff47b55f9ded188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc54db519fe88a9d4abd8cb5084dd4637722c5eb02d340e78a6820e60d6f2fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0972a1af95bd3370c2298e4094d8c59f3ccaf18c7a36699449dcebfdb0507ab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b014fa86f1cf20144ed88c810d0e40d7fe89a627f897b5d8ed17e2b80ea30810\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ff2
3888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1ff23888d9ad88b3a7c32d4d004c9a1faac4f96ce60b283a6e31923eb890790\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b096e7cfa1e6f41698feec31d71f60a8dde7ff6bbd3ca5da42140e293f673f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f45cc56b7170ac12ce22af589ece2e6ae7f094309290890242f78088ba0029d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-444l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-995kl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.884298 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"535ee11c-6a2f-4974-acfc-59b6463aa0f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25f3cdf90be2ee9e1b5b1eff2e81b1f42985106dfcba9a9aab0cc27bfa908a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89843858f314702b13c9436980c8c50c088761d78060e9e7b79182ef78cede81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89843858f314702b13c9436980c8c50c088761d78060e9e7b79182ef78cede81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.897660 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.897699 4870 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.897718 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.897742 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.897761 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:22Z","lastTransitionTime":"2026-02-16T17:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.901724 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26a32ba26ac2efae0599a2cdcadf68fe4f6ec3652adb0b73c9d6e64432c1bbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T17:01:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.913606 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9zmm6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b090777-a023-4789-877e-55d3f30e65f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bce8b5c7aae754336b60ff834e988d61c1715f4d939bd9370614f6d5f03bc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcvcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9zmm6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.929387 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jjq54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f144f1-d0b6-4871-a439-6aaf51304c4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f967b353df1fd88fc135755f8ab9d8f0b1dedc8b005d09f4191f821eccb01bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:01:07Z\\\",\\\"message\\\":\\\"2026-02-16T17:00:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_51f38232-ba70-4428-a541-1e41a42a0b6b\\\\n2026-02-16T17:00:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_51f38232-ba70-4428-a541-1e41a42a0b6b to /host/opt/cni/bin/\\\\n2026-02-16T17:00:22Z [verbose] multus-daemon started\\\\n2026-02-16T17:00:22Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T17:01:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25h7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jjq54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.942731 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28571c8d-03d1-4c81-9d6d-23328c859237\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99cf6abd32755d5f425791ef831241917e3c00c3deec55aa0938228a2c1391c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368d8655cd750405c9ca8c279f2bdb55e6d70
0aa73caf439a5cbee5fd5e0f3ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kv64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snc94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.974746 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5e86c61-0ddd-40ef-9eb3-0797a995d595\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61291bd7be3e62195b62246b7007af5dbf6436e813d7b1d0acd8ca2ec0138dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eca42e2552a12eef1e0e94ffd69f8e55eba95989902257f4e763c0aa5b61388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a140a6ad552347a2296c1d981fbe0250977abe1e23de70f5306218b975ff083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56427048d44af00ea576de6c58c604b21c973fd6cc4c7d32c40c98b7b8689425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b7d844b188ed2dee608d61439d0d8fb821dd339ca3fc172d4a7bdb4f71dc8b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b540fc38884e6d771341c55ce1dfd76b846a6a342701bb2b56ba31e139ac04\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://495132556b6adc7e4c9942881d119b2b73643dbc12f66b092fef692dcae36aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44774f67dbf02a7a9ff28660d426209612d21df807a5be21594689fd333d7059\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:22 crc kubenswrapper[4870]: I0216 17:01:22.989311 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93a8962-0aee-4190-bffa-8745334b3bb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135a632043f05b0ebd7f90c5cfa9bf2614121a1c1a4033ecb638cb71a0bd0576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae518dab8462f4e878b26fbc1f99530bdaf090f287183d4485c4205760e706d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:
58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea8d68056ac2cdd124954b32f9f99e874a8ab65673aa31c560d9b4a59d2717\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:22Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.000456 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.000508 
4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.000533 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.000553 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.000576 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:23Z","lastTransitionTime":"2026-02-16T17:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.005290 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5f99c75bdb48ddd6b6b6af68735e1f9d7d782d136d6683b8bdeecf0a34804\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.019510 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bhb7f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ecaee09-f493-4280-9dca-5c0b127c137a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c0a1938a726c756e4cc84d279b475c3ffbc82fe9f9b64bcdd4cce6a9372bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bhb7f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.038230 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650bce90-73d6-474d-ab19-f50252dc8bc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:20Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5256536b28a974ba56223677edeff840dc94d3e8dbbbe1c03b09c1e82f6ffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5256536b28a974ba56223677edeff840dc94d3e8dbbbe1c03b09c1e82f6ffb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:01:21Z\\\",\\\"message\\\":\\\"github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 17:01:21.201178 7033 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 17:01:21.201249 7033 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 
17:01:21.201292 7033 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 17:01:21.201335 7033 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 17:01:21.201340 7033 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 17:01:21.201397 7033 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 17:01:21.201398 7033 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 17:01:21.201437 7033 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 17:01:21.201366 7033 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 17:01:21.201462 7033 factory.go:656] Stopping watch factory\\\\nI0216 17:01:21.201475 7033 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 17:01:21.201542 7033 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 17:01:21.201382 7033 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 17:01:21.201500 7033 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:01:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2344b03663f79bcc
b5ac364937175c302d18504cca74b245f7e19673af8dc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmshf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-drrrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.060572 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcb39c2a-789a-40d5-b431-9d436bcc54dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T16:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:13Z\\\"
,\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771261213\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771261213\\\\\\\\\\\\\\\" (2026-02-16 16:00:12 +0000 UTC to 2027-02-16 16:00:12 +0000 UTC (now=2026-02-16 17:00:13.171136764 +0000 UTC))\\\\\\\"\\\\nI0216 17:00:13.171179 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 17:00:13.171201 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 17:00:13.171225 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171246 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 17:00:13.171284 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2850277618/tls.crt::/tmp/serving-cert-2850277618/tls.key\\\\\\\"\\\\nI0216 17:00:13.171368 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0216 17:00:13.171402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 17:00:13.173479 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 17:00:13.173485 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 17:00:13.173556 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:00:13.173567 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0216 17:00:13.177029 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T16:59:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T16:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T16:59:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.073315 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e693e8-f31b-4cc5-b521-0f37451019ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75b27a9f7b90d37e6469a6e7155b07f9ef20b0f8bffd65c4143a0da7ba37c364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb8
87938ec788adf37fc58b093b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2bmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cgzwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.085202 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d13b0b83-258a-4545-b358-e08252dbbe87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftth5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T17:00:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zsfxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:23 crc 
kubenswrapper[4870]: I0216 17:01:23.097596 4870 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:00:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b00146a250a4378a6c2ceb34e76390aed7dbf8c6abfa499b7f6faef04c3c062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00d7bde3b9243ec7b5815fcce8b842497f477da7451fdd531a9928d5f38571f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:00:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:23Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.103582 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.103620 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.103639 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.103660 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 
17:01:23.103672 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:23Z","lastTransitionTime":"2026-02-16T17:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.207032 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.207095 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.207117 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.207151 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.207176 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:23Z","lastTransitionTime":"2026-02-16T17:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.222031 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.222042 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:23 crc kubenswrapper[4870]: E0216 17:01:23.222320 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:23 crc kubenswrapper[4870]: E0216 17:01:23.222491 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.222065 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:23 crc kubenswrapper[4870]: E0216 17:01:23.222655 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.232544 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:51:34.733170586 +0000 UTC Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.310701 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.310760 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.310773 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.310793 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.310806 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:23Z","lastTransitionTime":"2026-02-16T17:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.413839 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.413879 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.413904 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.413921 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.413931 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:23Z","lastTransitionTime":"2026-02-16T17:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.517225 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.517725 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.517746 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.517770 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.517791 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:23Z","lastTransitionTime":"2026-02-16T17:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.620927 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.621047 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.621072 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.621106 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.621131 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:23Z","lastTransitionTime":"2026-02-16T17:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.724411 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.724501 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.724523 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.724552 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.724572 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:23Z","lastTransitionTime":"2026-02-16T17:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.827545 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.827615 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.827633 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.827664 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.827687 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:23Z","lastTransitionTime":"2026-02-16T17:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.930862 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.930913 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.930924 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.930958 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:23 crc kubenswrapper[4870]: I0216 17:01:23.930971 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:23Z","lastTransitionTime":"2026-02-16T17:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.034820 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.035237 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.035267 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.035303 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.035322 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:24Z","lastTransitionTime":"2026-02-16T17:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.139492 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.139560 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.139579 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.139609 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.139629 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:24Z","lastTransitionTime":"2026-02-16T17:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.222846 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:24 crc kubenswrapper[4870]: E0216 17:01:24.223178 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.233353 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:29:08.263716592 +0000 UTC Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.243737 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.243798 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.243817 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.243847 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.243866 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:24Z","lastTransitionTime":"2026-02-16T17:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.346664 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.346775 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.346808 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.346845 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.346869 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:24Z","lastTransitionTime":"2026-02-16T17:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.450078 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.450142 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.450153 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.450176 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.450189 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:24Z","lastTransitionTime":"2026-02-16T17:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.554335 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.554405 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.554430 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.554468 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.554494 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:24Z","lastTransitionTime":"2026-02-16T17:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.657539 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.657603 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.657621 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.657648 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.657667 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:24Z","lastTransitionTime":"2026-02-16T17:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.754093 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.754155 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.754167 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.754187 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.754198 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:24Z","lastTransitionTime":"2026-02-16T17:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:24 crc kubenswrapper[4870]: E0216 17:01:24.775695 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.781584 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.781657 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.781669 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.781690 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.781726 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:24Z","lastTransitionTime":"2026-02-16T17:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:24 crc kubenswrapper[4870]: E0216 17:01:24.804209 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.808458 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.808493 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.808506 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.808524 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.808535 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:24Z","lastTransitionTime":"2026-02-16T17:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:24 crc kubenswrapper[4870]: E0216 17:01:24.823463 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.828874 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.828931 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.828942 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.828980 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.828994 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:24Z","lastTransitionTime":"2026-02-16T17:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:24 crc kubenswrapper[4870]: E0216 17:01:24.847718 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.852267 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.852338 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.852349 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.852371 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.852383 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:24Z","lastTransitionTime":"2026-02-16T17:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:24 crc kubenswrapper[4870]: E0216 17:01:24.865912 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:01:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"580d9830-c978-4311-ba24-4b5d59c3355c\\\",\\\"systemUUID\\\":\\\"dab7b9c4-d71b-440c-b254-67ed578dcf0e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T17:01:24Z is after 2025-08-24T17:21:41Z" Feb 16 17:01:24 crc kubenswrapper[4870]: E0216 17:01:24.866049 4870 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.867598 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.867668 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.867684 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.867702 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.867716 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:24Z","lastTransitionTime":"2026-02-16T17:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.970625 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.970667 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.970676 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.970695 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:24 crc kubenswrapper[4870]: I0216 17:01:24.970707 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:24Z","lastTransitionTime":"2026-02-16T17:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.073938 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.074035 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.074049 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.074071 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.074090 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:25Z","lastTransitionTime":"2026-02-16T17:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.178107 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.178164 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.178176 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.178199 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.178212 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:25Z","lastTransitionTime":"2026-02-16T17:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.222997 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.223097 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.223445 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:25 crc kubenswrapper[4870]: E0216 17:01:25.223608 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:25 crc kubenswrapper[4870]: E0216 17:01:25.223708 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:25 crc kubenswrapper[4870]: E0216 17:01:25.223735 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.233933 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 13:31:12.847664268 +0000 UTC Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.281581 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.281621 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.281630 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.281645 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.281656 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:25Z","lastTransitionTime":"2026-02-16T17:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.384791 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.384872 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.384898 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.384932 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.384990 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:25Z","lastTransitionTime":"2026-02-16T17:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.487776 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.487817 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.487827 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.487844 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.487857 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:25Z","lastTransitionTime":"2026-02-16T17:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.590883 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.590927 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.590936 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.590966 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.590977 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:25Z","lastTransitionTime":"2026-02-16T17:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.693694 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.693746 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.693758 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.693778 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.693789 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:25Z","lastTransitionTime":"2026-02-16T17:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.797013 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.797108 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.797127 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.797155 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.797174 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:25Z","lastTransitionTime":"2026-02-16T17:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.901054 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.901115 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.901127 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.901147 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:25 crc kubenswrapper[4870]: I0216 17:01:25.901160 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:25Z","lastTransitionTime":"2026-02-16T17:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.004502 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.004574 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.004586 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.004608 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.004620 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:26Z","lastTransitionTime":"2026-02-16T17:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.107877 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.108005 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.108027 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.108053 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.108072 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:26Z","lastTransitionTime":"2026-02-16T17:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.212496 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.212552 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.212564 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.212587 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.212600 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:26Z","lastTransitionTime":"2026-02-16T17:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.222322 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:26 crc kubenswrapper[4870]: E0216 17:01:26.222443 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.234469 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 23:46:57.125782013 +0000 UTC Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.315264 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=70.315239802 podStartE2EDuration="1m10.315239802s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:26.296664748 +0000 UTC m=+90.780129152" watchObservedRunningTime="2026-02-16 17:01:26.315239802 +0000 UTC m=+90.798704186" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.317469 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.317532 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.317556 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.317582 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.317603 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:26Z","lastTransitionTime":"2026-02-16T17:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.332450 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=70.332425907 podStartE2EDuration="1m10.332425907s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:26.316286392 +0000 UTC m=+90.799750776" watchObservedRunningTime="2026-02-16 17:01:26.332425907 +0000 UTC m=+90.815890331" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.347349 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bhb7f" podStartSLOduration=70.347325867 podStartE2EDuration="1m10.347325867s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:26.346495283 +0000 UTC m=+90.829959667" watchObservedRunningTime="2026-02-16 17:01:26.347325867 +0000 UTC m=+90.830790251" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.387761 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podStartSLOduration=70.387740196 podStartE2EDuration="1m10.387740196s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:26.375147231 +0000 UTC m=+90.858611615" watchObservedRunningTime="2026-02-16 17:01:26.387740196 +0000 UTC m=+90.871204580" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.419714 4870 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.420026 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.420094 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.420169 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.420233 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:26Z","lastTransitionTime":"2026-02-16T17:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.432624 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-995kl" podStartSLOduration=70.43259527 podStartE2EDuration="1m10.43259527s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:26.423868164 +0000 UTC m=+90.907332548" watchObservedRunningTime="2026-02-16 17:01:26.43259527 +0000 UTC m=+90.916059654" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.445452 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=18.445428982 podStartE2EDuration="18.445428982s" podCreationTimestamp="2026-02-16 17:01:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:26.445216356 +0000 UTC m=+90.928680760" watchObservedRunningTime="2026-02-16 17:01:26.445428982 +0000 UTC m=+90.928893386" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.458410 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=36.458390738 podStartE2EDuration="36.458390738s" podCreationTimestamp="2026-02-16 17:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:26.457825342 +0000 UTC m=+90.941289726" watchObservedRunningTime="2026-02-16 17:01:26.458390738 +0000 UTC m=+90.941855122" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.503401 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snc94" 
podStartSLOduration=69.503376586 podStartE2EDuration="1m9.503376586s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:26.502724177 +0000 UTC m=+90.986188561" watchObservedRunningTime="2026-02-16 17:01:26.503376586 +0000 UTC m=+90.986840970" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.522242 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.522281 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.522290 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.522306 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.522316 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:26Z","lastTransitionTime":"2026-02-16T17:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.531891 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=69.531874699 podStartE2EDuration="1m9.531874699s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:26.53048521 +0000 UTC m=+91.013949604" watchObservedRunningTime="2026-02-16 17:01:26.531874699 +0000 UTC m=+91.015339083" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.557002 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-9zmm6" podStartSLOduration=70.556975087 podStartE2EDuration="1m10.556975087s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:26.556613107 +0000 UTC m=+91.040077491" watchObservedRunningTime="2026-02-16 17:01:26.556975087 +0000 UTC m=+91.040439471" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.625520 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.625599 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.625639 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.625666 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.625680 4870 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:26Z","lastTransitionTime":"2026-02-16T17:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.728532 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.728644 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.728674 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.728709 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.728734 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:26Z","lastTransitionTime":"2026-02-16T17:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.832147 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.832204 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.832216 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.832236 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.832249 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:26Z","lastTransitionTime":"2026-02-16T17:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.935108 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.935160 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.935169 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.935190 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:26 crc kubenswrapper[4870]: I0216 17:01:26.935201 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:26Z","lastTransitionTime":"2026-02-16T17:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.038055 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.038116 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.038137 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.038164 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.038181 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:27Z","lastTransitionTime":"2026-02-16T17:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.140655 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.140693 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.140704 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.140721 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.140733 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:27Z","lastTransitionTime":"2026-02-16T17:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.222627 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.222628 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:27 crc kubenswrapper[4870]: E0216 17:01:27.222814 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.222635 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:27 crc kubenswrapper[4870]: E0216 17:01:27.223026 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:27 crc kubenswrapper[4870]: E0216 17:01:27.223232 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.234662 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:59:40.244484055 +0000 UTC Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.244307 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.244372 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.244391 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.244423 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.244441 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:27Z","lastTransitionTime":"2026-02-16T17:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.347437 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.347478 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.347488 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.347507 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.347519 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:27Z","lastTransitionTime":"2026-02-16T17:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.450686 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.450959 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.451021 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.451125 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.451200 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:27Z","lastTransitionTime":"2026-02-16T17:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.554224 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.554294 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.554308 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.554330 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.554355 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:27Z","lastTransitionTime":"2026-02-16T17:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.656878 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.656993 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.657016 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.657042 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.657064 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:27Z","lastTransitionTime":"2026-02-16T17:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.760530 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.760611 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.760637 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.760666 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.760687 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:27Z","lastTransitionTime":"2026-02-16T17:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.865152 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.865937 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.866025 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.866096 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.866165 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:27Z","lastTransitionTime":"2026-02-16T17:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.969657 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.969732 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.969754 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.969782 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:27 crc kubenswrapper[4870]: I0216 17:01:27.969801 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:27Z","lastTransitionTime":"2026-02-16T17:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.079812 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.079886 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.079905 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.079933 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.079976 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:28Z","lastTransitionTime":"2026-02-16T17:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.184259 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.184313 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.184327 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.184348 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.184361 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:28Z","lastTransitionTime":"2026-02-16T17:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.223212 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:28 crc kubenswrapper[4870]: E0216 17:01:28.223470 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.235408 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:31:27.984167167 +0000 UTC Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.288268 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.288380 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.288401 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.288433 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.288451 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:28Z","lastTransitionTime":"2026-02-16T17:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.391836 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.391909 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.391931 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.391992 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.392018 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:28Z","lastTransitionTime":"2026-02-16T17:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.495120 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.495178 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.495190 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.495212 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.495225 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:28Z","lastTransitionTime":"2026-02-16T17:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.598219 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.598258 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.598268 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.598286 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.598297 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:28Z","lastTransitionTime":"2026-02-16T17:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.700802 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.700906 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.700935 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.701007 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.701033 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:28Z","lastTransitionTime":"2026-02-16T17:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.803564 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.803619 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.803634 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.803656 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.803670 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:28Z","lastTransitionTime":"2026-02-16T17:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.906998 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.907049 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.907062 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.907087 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:28 crc kubenswrapper[4870]: I0216 17:01:28.907104 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:28Z","lastTransitionTime":"2026-02-16T17:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.010985 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.011059 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.011079 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.011107 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.011126 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:29Z","lastTransitionTime":"2026-02-16T17:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.115088 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.115140 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.115152 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.115172 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.115187 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:29Z","lastTransitionTime":"2026-02-16T17:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.218424 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.218496 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.218514 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.218543 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.218564 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:29Z","lastTransitionTime":"2026-02-16T17:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.222792 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.222875 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:29 crc kubenswrapper[4870]: E0216 17:01:29.222976 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.222893 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:29 crc kubenswrapper[4870]: E0216 17:01:29.223179 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:29 crc kubenswrapper[4870]: E0216 17:01:29.223331 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.236398 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 13:28:09.695399672 +0000 UTC Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.321568 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.321629 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.321642 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.321666 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.321679 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:29Z","lastTransitionTime":"2026-02-16T17:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.425108 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.425216 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.425242 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.425281 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.425303 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:29Z","lastTransitionTime":"2026-02-16T17:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.528856 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.528922 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.528934 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.528988 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.529005 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:29Z","lastTransitionTime":"2026-02-16T17:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.632773 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.632830 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.632841 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.632860 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.632874 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:29Z","lastTransitionTime":"2026-02-16T17:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.735911 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.735967 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.735976 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.735996 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.736007 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:29Z","lastTransitionTime":"2026-02-16T17:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.838678 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.838714 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.838727 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.838743 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.838756 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:29Z","lastTransitionTime":"2026-02-16T17:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.941384 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.941448 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.941459 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.941484 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:29 crc kubenswrapper[4870]: I0216 17:01:29.941496 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:29Z","lastTransitionTime":"2026-02-16T17:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.043931 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.043993 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.044002 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.044020 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.044031 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:30Z","lastTransitionTime":"2026-02-16T17:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.147318 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.147393 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.147417 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.147455 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.147481 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:30Z","lastTransitionTime":"2026-02-16T17:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.222800 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:30 crc kubenswrapper[4870]: E0216 17:01:30.223313 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.236993 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 07:10:29.428291029 +0000 UTC Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.250583 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.250643 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.250656 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.250678 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.250693 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:30Z","lastTransitionTime":"2026-02-16T17:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.354594 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.354638 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.354648 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.354665 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.354677 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:30Z","lastTransitionTime":"2026-02-16T17:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.457576 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.457625 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.457636 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.457656 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.457669 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:30Z","lastTransitionTime":"2026-02-16T17:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.560004 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.560101 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.560113 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.560134 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.560145 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:30Z","lastTransitionTime":"2026-02-16T17:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.663614 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.663898 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.663970 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.664006 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.664025 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:30Z","lastTransitionTime":"2026-02-16T17:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.766399 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.766453 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.766465 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.766486 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.766500 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:30Z","lastTransitionTime":"2026-02-16T17:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.870901 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.870980 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.870997 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.871022 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.871035 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:30Z","lastTransitionTime":"2026-02-16T17:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.974722 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.974782 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.974794 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.974813 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:30 crc kubenswrapper[4870]: I0216 17:01:30.974827 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:30Z","lastTransitionTime":"2026-02-16T17:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.077253 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.077339 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.077362 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.077392 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.077409 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:31Z","lastTransitionTime":"2026-02-16T17:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.180228 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.180271 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.180283 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.180304 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.180317 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:31Z","lastTransitionTime":"2026-02-16T17:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.222822 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.222994 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.223304 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:31 crc kubenswrapper[4870]: E0216 17:01:31.223515 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:31 crc kubenswrapper[4870]: E0216 17:01:31.223708 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:31 crc kubenswrapper[4870]: E0216 17:01:31.223851 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.237203 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:06:20.980047348 +0000 UTC Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.282965 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.283011 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.283026 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.283047 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.283063 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:31Z","lastTransitionTime":"2026-02-16T17:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.386083 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.386127 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.386139 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.386159 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.386170 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:31Z","lastTransitionTime":"2026-02-16T17:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.489647 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.489707 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.489722 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.489744 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.489759 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:31Z","lastTransitionTime":"2026-02-16T17:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.592280 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.592324 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.592332 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.592350 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.592360 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:31Z","lastTransitionTime":"2026-02-16T17:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.695069 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.695141 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.695162 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.695192 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.695213 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:31Z","lastTransitionTime":"2026-02-16T17:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.797685 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.797772 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.797788 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.797816 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.797831 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:31Z","lastTransitionTime":"2026-02-16T17:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.900425 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.900474 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.900485 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.900502 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:31 crc kubenswrapper[4870]: I0216 17:01:31.900512 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:31Z","lastTransitionTime":"2026-02-16T17:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.003164 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.003218 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.003230 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.003248 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.003260 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:32Z","lastTransitionTime":"2026-02-16T17:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.119507 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.119579 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.119602 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.119636 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.119657 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:32Z","lastTransitionTime":"2026-02-16T17:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.222023 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:32 crc kubenswrapper[4870]: E0216 17:01:32.222192 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.222713 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.222796 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.222819 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.222849 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.222875 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:32Z","lastTransitionTime":"2026-02-16T17:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.238394 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 13:18:27.916683836 +0000 UTC Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.326204 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.326251 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.326262 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.326280 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.326290 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:32Z","lastTransitionTime":"2026-02-16T17:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.428450 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.428489 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.428497 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.428512 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.428521 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:32Z","lastTransitionTime":"2026-02-16T17:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.530866 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.530914 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.530927 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.530979 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.531000 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:32Z","lastTransitionTime":"2026-02-16T17:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.633421 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.633468 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.633481 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.633502 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.633515 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:32Z","lastTransitionTime":"2026-02-16T17:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.737115 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.737190 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.737209 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.737235 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.737252 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:32Z","lastTransitionTime":"2026-02-16T17:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.841408 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.841483 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.841502 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.841529 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.841554 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:32Z","lastTransitionTime":"2026-02-16T17:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.945821 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.945875 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.945894 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.945931 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:32 crc kubenswrapper[4870]: I0216 17:01:32.946001 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:32Z","lastTransitionTime":"2026-02-16T17:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.049832 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.049875 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.049883 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.049900 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.049913 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:33Z","lastTransitionTime":"2026-02-16T17:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.152708 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.152771 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.152781 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.152802 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.152824 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:33Z","lastTransitionTime":"2026-02-16T17:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.222596 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.222665 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.222746 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:33 crc kubenswrapper[4870]: E0216 17:01:33.222879 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:33 crc kubenswrapper[4870]: E0216 17:01:33.223082 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:33 crc kubenswrapper[4870]: E0216 17:01:33.223139 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.238926 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 18:57:53.528724774 +0000 UTC Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.255078 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.255131 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.255140 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.255156 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.255168 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:33Z","lastTransitionTime":"2026-02-16T17:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.358229 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.358279 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.358299 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.358330 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.358352 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:33Z","lastTransitionTime":"2026-02-16T17:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.461091 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.461165 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.461184 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.461204 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.461219 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:33Z","lastTransitionTime":"2026-02-16T17:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.563881 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.563986 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.564014 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.564038 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.564053 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:33Z","lastTransitionTime":"2026-02-16T17:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.667503 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.667554 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.667566 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.667587 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.667601 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:33Z","lastTransitionTime":"2026-02-16T17:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.770616 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.770679 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.770689 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.770709 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.770722 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:33Z","lastTransitionTime":"2026-02-16T17:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.874115 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.874197 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.874221 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.874259 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.874289 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:33Z","lastTransitionTime":"2026-02-16T17:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.977428 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.977495 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.977510 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.977535 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:33 crc kubenswrapper[4870]: I0216 17:01:33.977552 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:33Z","lastTransitionTime":"2026-02-16T17:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.079718 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.079766 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.079774 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.079790 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.079800 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:34Z","lastTransitionTime":"2026-02-16T17:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.187251 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.187331 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.187342 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.187361 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.187373 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:34Z","lastTransitionTime":"2026-02-16T17:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.222773 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:34 crc kubenswrapper[4870]: E0216 17:01:34.223025 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.239879 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:01:30.799440692 +0000 UTC Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.290166 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.290210 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.290226 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.290249 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.290262 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:34Z","lastTransitionTime":"2026-02-16T17:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.394328 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.394386 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.394396 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.394416 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.394427 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:34Z","lastTransitionTime":"2026-02-16T17:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.497872 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.498148 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.498156 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.498172 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.498181 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:34Z","lastTransitionTime":"2026-02-16T17:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.600849 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.600895 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.600906 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.600925 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.600939 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:34Z","lastTransitionTime":"2026-02-16T17:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.704071 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.704106 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.704115 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.704132 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.704146 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:34Z","lastTransitionTime":"2026-02-16T17:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.806542 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.806608 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.806621 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.806645 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.806658 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:34Z","lastTransitionTime":"2026-02-16T17:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.910383 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.910455 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.910472 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.910501 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:34 crc kubenswrapper[4870]: I0216 17:01:34.910523 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:34Z","lastTransitionTime":"2026-02-16T17:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.012989 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.013027 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.013037 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.013051 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.013060 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:35Z","lastTransitionTime":"2026-02-16T17:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.116205 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.116258 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.116268 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.116288 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.116299 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:35Z","lastTransitionTime":"2026-02-16T17:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.198873 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.198926 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.198940 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.198987 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.199003 4870 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:01:35Z","lastTransitionTime":"2026-02-16T17:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.222700 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:35 crc kubenswrapper[4870]: E0216 17:01:35.222818 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.223098 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:35 crc kubenswrapper[4870]: E0216 17:01:35.223173 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.223323 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:35 crc kubenswrapper[4870]: E0216 17:01:35.223401 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.240559 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:16:06.646035674 +0000 UTC Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.240631 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.252148 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-jjq54" podStartSLOduration=79.252126398 podStartE2EDuration="1m19.252126398s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:26.574064999 +0000 UTC m=+91.057529413" watchObservedRunningTime="2026-02-16 17:01:35.252126398 +0000 UTC m=+99.735590782" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.252726 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j"] Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.253262 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.255141 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.256484 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.256749 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.257181 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.262242 4870 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.309930 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fb6ad09b-b8e8-47ed-b24c-2f8aded03c93-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8l52j\" (UID: \"fb6ad09b-b8e8-47ed-b24c-2f8aded03c93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.310037 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fb6ad09b-b8e8-47ed-b24c-2f8aded03c93-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8l52j\" (UID: \"fb6ad09b-b8e8-47ed-b24c-2f8aded03c93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 
17:01:35.310061 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb6ad09b-b8e8-47ed-b24c-2f8aded03c93-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8l52j\" (UID: \"fb6ad09b-b8e8-47ed-b24c-2f8aded03c93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.310081 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb6ad09b-b8e8-47ed-b24c-2f8aded03c93-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8l52j\" (UID: \"fb6ad09b-b8e8-47ed-b24c-2f8aded03c93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.310164 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fb6ad09b-b8e8-47ed-b24c-2f8aded03c93-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8l52j\" (UID: \"fb6ad09b-b8e8-47ed-b24c-2f8aded03c93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.411838 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fb6ad09b-b8e8-47ed-b24c-2f8aded03c93-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8l52j\" (UID: \"fb6ad09b-b8e8-47ed-b24c-2f8aded03c93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.411915 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb6ad09b-b8e8-47ed-b24c-2f8aded03c93-serving-cert\") pod 
\"cluster-version-operator-5c965bbfc6-8l52j\" (UID: \"fb6ad09b-b8e8-47ed-b24c-2f8aded03c93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.411973 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb6ad09b-b8e8-47ed-b24c-2f8aded03c93-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8l52j\" (UID: \"fb6ad09b-b8e8-47ed-b24c-2f8aded03c93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.412029 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fb6ad09b-b8e8-47ed-b24c-2f8aded03c93-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8l52j\" (UID: \"fb6ad09b-b8e8-47ed-b24c-2f8aded03c93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.412058 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fb6ad09b-b8e8-47ed-b24c-2f8aded03c93-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8l52j\" (UID: \"fb6ad09b-b8e8-47ed-b24c-2f8aded03c93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.412089 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fb6ad09b-b8e8-47ed-b24c-2f8aded03c93-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8l52j\" (UID: \"fb6ad09b-b8e8-47ed-b24c-2f8aded03c93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.412354 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fb6ad09b-b8e8-47ed-b24c-2f8aded03c93-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8l52j\" (UID: \"fb6ad09b-b8e8-47ed-b24c-2f8aded03c93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.413333 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fb6ad09b-b8e8-47ed-b24c-2f8aded03c93-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8l52j\" (UID: \"fb6ad09b-b8e8-47ed-b24c-2f8aded03c93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.428354 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb6ad09b-b8e8-47ed-b24c-2f8aded03c93-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8l52j\" (UID: \"fb6ad09b-b8e8-47ed-b24c-2f8aded03c93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.431800 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb6ad09b-b8e8-47ed-b24c-2f8aded03c93-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8l52j\" (UID: \"fb6ad09b-b8e8-47ed-b24c-2f8aded03c93\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.513343 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs\") pod \"network-metrics-daemon-zsfxc\" (UID: \"d13b0b83-258a-4545-b358-e08252dbbe87\") " pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:35 crc 
kubenswrapper[4870]: E0216 17:01:35.513576 4870 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:01:35 crc kubenswrapper[4870]: E0216 17:01:35.513723 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs podName:d13b0b83-258a-4545-b358-e08252dbbe87 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:39.513693431 +0000 UTC m=+163.997157915 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs") pod "network-metrics-daemon-zsfxc" (UID: "d13b0b83-258a-4545-b358-e08252dbbe87") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.574290 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.824826 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" event={"ID":"fb6ad09b-b8e8-47ed-b24c-2f8aded03c93","Type":"ContainerStarted","Data":"194c0079813c7aa04c071f2034f90074992a6732f00dc7c28fc3c9e033f0c271"} Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.825280 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" event={"ID":"fb6ad09b-b8e8-47ed-b24c-2f8aded03c93","Type":"ContainerStarted","Data":"531d9ee711ee85c38ad68b3e801cc8f56ab001371b96a8c6d461fe0e8a95763a"} Feb 16 17:01:35 crc kubenswrapper[4870]: I0216 17:01:35.843683 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8l52j" 
podStartSLOduration=79.843658822 podStartE2EDuration="1m19.843658822s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:35.841939824 +0000 UTC m=+100.325404208" watchObservedRunningTime="2026-02-16 17:01:35.843658822 +0000 UTC m=+100.327123216" Feb 16 17:01:36 crc kubenswrapper[4870]: I0216 17:01:36.222400 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:36 crc kubenswrapper[4870]: E0216 17:01:36.225069 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:37 crc kubenswrapper[4870]: I0216 17:01:37.222319 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:37 crc kubenswrapper[4870]: I0216 17:01:37.222396 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:37 crc kubenswrapper[4870]: I0216 17:01:37.222479 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:37 crc kubenswrapper[4870]: E0216 17:01:37.222565 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:37 crc kubenswrapper[4870]: E0216 17:01:37.222778 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:37 crc kubenswrapper[4870]: E0216 17:01:37.223032 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:38 crc kubenswrapper[4870]: I0216 17:01:38.222406 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:38 crc kubenswrapper[4870]: I0216 17:01:38.223285 4870 scope.go:117] "RemoveContainer" containerID="5b5256536b28a974ba56223677edeff840dc94d3e8dbbbe1c03b09c1e82f6ffb" Feb 16 17:01:38 crc kubenswrapper[4870]: E0216 17:01:38.223038 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:38 crc kubenswrapper[4870]: E0216 17:01:38.232200 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" Feb 16 17:01:39 crc kubenswrapper[4870]: I0216 17:01:39.222926 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:39 crc kubenswrapper[4870]: I0216 17:01:39.223152 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:39 crc kubenswrapper[4870]: E0216 17:01:39.223343 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:39 crc kubenswrapper[4870]: E0216 17:01:39.223488 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:39 crc kubenswrapper[4870]: I0216 17:01:39.222905 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:39 crc kubenswrapper[4870]: E0216 17:01:39.223875 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:40 crc kubenswrapper[4870]: I0216 17:01:40.222757 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:40 crc kubenswrapper[4870]: E0216 17:01:40.223251 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:41 crc kubenswrapper[4870]: I0216 17:01:41.222828 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:41 crc kubenswrapper[4870]: I0216 17:01:41.222867 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:41 crc kubenswrapper[4870]: I0216 17:01:41.222983 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:41 crc kubenswrapper[4870]: E0216 17:01:41.223056 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:41 crc kubenswrapper[4870]: E0216 17:01:41.223177 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:41 crc kubenswrapper[4870]: E0216 17:01:41.223264 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:42 crc kubenswrapper[4870]: I0216 17:01:42.222589 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:42 crc kubenswrapper[4870]: E0216 17:01:42.222783 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:43 crc kubenswrapper[4870]: I0216 17:01:43.222530 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:43 crc kubenswrapper[4870]: I0216 17:01:43.222601 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:43 crc kubenswrapper[4870]: I0216 17:01:43.222647 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:43 crc kubenswrapper[4870]: E0216 17:01:43.222708 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:43 crc kubenswrapper[4870]: E0216 17:01:43.222797 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:43 crc kubenswrapper[4870]: E0216 17:01:43.222884 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:44 crc kubenswrapper[4870]: I0216 17:01:44.222567 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:44 crc kubenswrapper[4870]: E0216 17:01:44.223042 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:45 crc kubenswrapper[4870]: I0216 17:01:45.222813 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:45 crc kubenswrapper[4870]: I0216 17:01:45.222890 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:45 crc kubenswrapper[4870]: E0216 17:01:45.223068 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:45 crc kubenswrapper[4870]: I0216 17:01:45.222848 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:45 crc kubenswrapper[4870]: E0216 17:01:45.223212 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:45 crc kubenswrapper[4870]: E0216 17:01:45.223318 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:46 crc kubenswrapper[4870]: I0216 17:01:46.222301 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:46 crc kubenswrapper[4870]: E0216 17:01:46.223772 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:47 crc kubenswrapper[4870]: I0216 17:01:47.222349 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:47 crc kubenswrapper[4870]: E0216 17:01:47.222516 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:47 crc kubenswrapper[4870]: I0216 17:01:47.222884 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:47 crc kubenswrapper[4870]: E0216 17:01:47.223015 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:47 crc kubenswrapper[4870]: I0216 17:01:47.223307 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:47 crc kubenswrapper[4870]: E0216 17:01:47.223522 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:48 crc kubenswrapper[4870]: I0216 17:01:48.222207 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:48 crc kubenswrapper[4870]: E0216 17:01:48.223101 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:49 crc kubenswrapper[4870]: I0216 17:01:49.221874 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:49 crc kubenswrapper[4870]: I0216 17:01:49.221930 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:49 crc kubenswrapper[4870]: E0216 17:01:49.222071 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:49 crc kubenswrapper[4870]: I0216 17:01:49.221879 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:49 crc kubenswrapper[4870]: E0216 17:01:49.222176 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:49 crc kubenswrapper[4870]: E0216 17:01:49.222263 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:50 crc kubenswrapper[4870]: I0216 17:01:50.222450 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:50 crc kubenswrapper[4870]: E0216 17:01:50.222663 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:51 crc kubenswrapper[4870]: I0216 17:01:51.222627 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:51 crc kubenswrapper[4870]: I0216 17:01:51.222723 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:51 crc kubenswrapper[4870]: I0216 17:01:51.222885 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:51 crc kubenswrapper[4870]: E0216 17:01:51.222930 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:51 crc kubenswrapper[4870]: E0216 17:01:51.223095 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:51 crc kubenswrapper[4870]: E0216 17:01:51.223230 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:51 crc kubenswrapper[4870]: I0216 17:01:51.224573 4870 scope.go:117] "RemoveContainer" containerID="5b5256536b28a974ba56223677edeff840dc94d3e8dbbbe1c03b09c1e82f6ffb" Feb 16 17:01:51 crc kubenswrapper[4870]: E0216 17:01:51.224882 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-drrrv_openshift-ovn-kubernetes(650bce90-73d6-474d-ab19-f50252dc8bc3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" Feb 16 17:01:52 crc kubenswrapper[4870]: I0216 17:01:52.222632 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:52 crc kubenswrapper[4870]: E0216 17:01:52.222822 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:53 crc kubenswrapper[4870]: I0216 17:01:53.222777 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:53 crc kubenswrapper[4870]: I0216 17:01:53.222819 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:53 crc kubenswrapper[4870]: E0216 17:01:53.223071 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:53 crc kubenswrapper[4870]: E0216 17:01:53.223186 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:53 crc kubenswrapper[4870]: I0216 17:01:53.224054 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:53 crc kubenswrapper[4870]: E0216 17:01:53.224191 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:54 crc kubenswrapper[4870]: I0216 17:01:54.222541 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:54 crc kubenswrapper[4870]: E0216 17:01:54.222800 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:54 crc kubenswrapper[4870]: I0216 17:01:54.897328 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjq54_52f144f1-d0b6-4871-a439-6aaf51304c4b/kube-multus/1.log" Feb 16 17:01:54 crc kubenswrapper[4870]: I0216 17:01:54.898087 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjq54_52f144f1-d0b6-4871-a439-6aaf51304c4b/kube-multus/0.log" Feb 16 17:01:54 crc kubenswrapper[4870]: I0216 17:01:54.898134 4870 generic.go:334] "Generic (PLEG): container finished" podID="52f144f1-d0b6-4871-a439-6aaf51304c4b" containerID="f967b353df1fd88fc135755f8ab9d8f0b1dedc8b005d09f4191f821eccb01bdb" exitCode=1 Feb 16 17:01:54 crc kubenswrapper[4870]: I0216 17:01:54.898168 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jjq54" event={"ID":"52f144f1-d0b6-4871-a439-6aaf51304c4b","Type":"ContainerDied","Data":"f967b353df1fd88fc135755f8ab9d8f0b1dedc8b005d09f4191f821eccb01bdb"} Feb 16 17:01:54 crc kubenswrapper[4870]: I0216 17:01:54.898206 4870 scope.go:117] "RemoveContainer" containerID="2620d534b7556c614c40f17409bd7541c50a2fc5bdeae25b1c65abb456bab77e" Feb 16 17:01:54 crc kubenswrapper[4870]: I0216 17:01:54.898708 4870 scope.go:117] "RemoveContainer" containerID="f967b353df1fd88fc135755f8ab9d8f0b1dedc8b005d09f4191f821eccb01bdb" Feb 16 17:01:54 crc kubenswrapper[4870]: E0216 17:01:54.899067 4870 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-jjq54_openshift-multus(52f144f1-d0b6-4871-a439-6aaf51304c4b)\"" pod="openshift-multus/multus-jjq54" podUID="52f144f1-d0b6-4871-a439-6aaf51304c4b" Feb 16 17:01:55 crc kubenswrapper[4870]: I0216 17:01:55.222530 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:55 crc kubenswrapper[4870]: E0216 17:01:55.222678 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:55 crc kubenswrapper[4870]: I0216 17:01:55.222871 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:55 crc kubenswrapper[4870]: E0216 17:01:55.222929 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:55 crc kubenswrapper[4870]: I0216 17:01:55.223064 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:55 crc kubenswrapper[4870]: E0216 17:01:55.223135 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:55 crc kubenswrapper[4870]: I0216 17:01:55.904605 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjq54_52f144f1-d0b6-4871-a439-6aaf51304c4b/kube-multus/1.log" Feb 16 17:01:56 crc kubenswrapper[4870]: I0216 17:01:56.222399 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:56 crc kubenswrapper[4870]: E0216 17:01:56.224192 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:56 crc kubenswrapper[4870]: E0216 17:01:56.226734 4870 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 16 17:01:56 crc kubenswrapper[4870]: E0216 17:01:56.340191 4870 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 17:01:57 crc kubenswrapper[4870]: I0216 17:01:57.221932 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:57 crc kubenswrapper[4870]: I0216 17:01:57.222053 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:57 crc kubenswrapper[4870]: E0216 17:01:57.222138 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:57 crc kubenswrapper[4870]: E0216 17:01:57.222270 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:57 crc kubenswrapper[4870]: I0216 17:01:57.223138 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:57 crc kubenswrapper[4870]: E0216 17:01:57.223385 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:01:58 crc kubenswrapper[4870]: I0216 17:01:58.222920 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:01:58 crc kubenswrapper[4870]: E0216 17:01:58.223215 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:01:59 crc kubenswrapper[4870]: I0216 17:01:59.222623 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:01:59 crc kubenswrapper[4870]: I0216 17:01:59.222653 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:01:59 crc kubenswrapper[4870]: I0216 17:01:59.222870 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:01:59 crc kubenswrapper[4870]: E0216 17:01:59.222863 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:01:59 crc kubenswrapper[4870]: E0216 17:01:59.223032 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:01:59 crc kubenswrapper[4870]: E0216 17:01:59.223216 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:02:00 crc kubenswrapper[4870]: I0216 17:02:00.222493 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:02:00 crc kubenswrapper[4870]: E0216 17:02:00.222706 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:02:01 crc kubenswrapper[4870]: I0216 17:02:01.222820 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:02:01 crc kubenswrapper[4870]: I0216 17:02:01.223159 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:02:01 crc kubenswrapper[4870]: I0216 17:02:01.223265 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:02:01 crc kubenswrapper[4870]: E0216 17:02:01.224080 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:02:01 crc kubenswrapper[4870]: E0216 17:02:01.223907 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:02:01 crc kubenswrapper[4870]: E0216 17:02:01.224282 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:02:01 crc kubenswrapper[4870]: E0216 17:02:01.341528 4870 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:02:02 crc kubenswrapper[4870]: I0216 17:02:02.223543 4870 scope.go:117] "RemoveContainer" containerID="5b5256536b28a974ba56223677edeff840dc94d3e8dbbbe1c03b09c1e82f6ffb" Feb 16 17:02:02 crc kubenswrapper[4870]: I0216 17:02:02.224365 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:02:02 crc kubenswrapper[4870]: E0216 17:02:02.224514 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:02:02 crc kubenswrapper[4870]: I0216 17:02:02.931879 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovnkube-controller/3.log" Feb 16 17:02:02 crc kubenswrapper[4870]: I0216 17:02:02.934290 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerStarted","Data":"56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4"} Feb 16 17:02:02 crc kubenswrapper[4870]: I0216 17:02:02.935295 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:02:02 crc kubenswrapper[4870]: I0216 17:02:02.961672 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podStartSLOduration=106.961623712 podStartE2EDuration="1m46.961623712s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:02.961519509 +0000 UTC m=+127.444983893" watchObservedRunningTime="2026-02-16 17:02:02.961623712 +0000 UTC m=+127.445088116" Feb 16 17:02:03 crc kubenswrapper[4870]: I0216 17:02:03.222782 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:02:03 crc kubenswrapper[4870]: I0216 17:02:03.223011 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:02:03 crc kubenswrapper[4870]: E0216 17:02:03.223333 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:02:03 crc kubenswrapper[4870]: I0216 17:02:03.223041 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:02:03 crc kubenswrapper[4870]: E0216 17:02:03.223521 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:02:03 crc kubenswrapper[4870]: E0216 17:02:03.223725 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:02:03 crc kubenswrapper[4870]: I0216 17:02:03.331644 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-zsfxc"] Feb 16 17:02:03 crc kubenswrapper[4870]: I0216 17:02:03.938312 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:02:03 crc kubenswrapper[4870]: E0216 17:02:03.939022 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:02:04 crc kubenswrapper[4870]: I0216 17:02:04.228825 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:02:04 crc kubenswrapper[4870]: E0216 17:02:04.229026 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:02:05 crc kubenswrapper[4870]: I0216 17:02:05.222376 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:02:05 crc kubenswrapper[4870]: I0216 17:02:05.222416 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:02:05 crc kubenswrapper[4870]: E0216 17:02:05.222602 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:02:05 crc kubenswrapper[4870]: I0216 17:02:05.222426 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:02:05 crc kubenswrapper[4870]: E0216 17:02:05.222736 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:02:05 crc kubenswrapper[4870]: E0216 17:02:05.222845 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:02:06 crc kubenswrapper[4870]: I0216 17:02:06.222674 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:02:06 crc kubenswrapper[4870]: E0216 17:02:06.223742 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:02:06 crc kubenswrapper[4870]: E0216 17:02:06.342528 4870 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:02:07 crc kubenswrapper[4870]: I0216 17:02:07.222793 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:02:07 crc kubenswrapper[4870]: I0216 17:02:07.222983 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:02:07 crc kubenswrapper[4870]: I0216 17:02:07.222865 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:02:07 crc kubenswrapper[4870]: E0216 17:02:07.223319 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:02:07 crc kubenswrapper[4870]: E0216 17:02:07.223437 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:02:07 crc kubenswrapper[4870]: E0216 17:02:07.223475 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:02:08 crc kubenswrapper[4870]: I0216 17:02:08.222552 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:02:08 crc kubenswrapper[4870]: E0216 17:02:08.222723 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:02:09 crc kubenswrapper[4870]: I0216 17:02:09.222450 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:02:09 crc kubenswrapper[4870]: I0216 17:02:09.222478 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:02:09 crc kubenswrapper[4870]: I0216 17:02:09.222535 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:02:09 crc kubenswrapper[4870]: E0216 17:02:09.222654 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:02:09 crc kubenswrapper[4870]: E0216 17:02:09.222731 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:02:09 crc kubenswrapper[4870]: E0216 17:02:09.223016 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:02:09 crc kubenswrapper[4870]: I0216 17:02:09.223220 4870 scope.go:117] "RemoveContainer" containerID="f967b353df1fd88fc135755f8ab9d8f0b1dedc8b005d09f4191f821eccb01bdb" Feb 16 17:02:09 crc kubenswrapper[4870]: I0216 17:02:09.964425 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjq54_52f144f1-d0b6-4871-a439-6aaf51304c4b/kube-multus/1.log" Feb 16 17:02:09 crc kubenswrapper[4870]: I0216 17:02:09.965292 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jjq54" event={"ID":"52f144f1-d0b6-4871-a439-6aaf51304c4b","Type":"ContainerStarted","Data":"b0a335f8947cdf12560eade87cdde71ba410a4fb308365d680fba7d66dfa88b5"} Feb 16 17:02:10 crc kubenswrapper[4870]: I0216 17:02:10.222800 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:02:10 crc kubenswrapper[4870]: E0216 17:02:10.223025 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 17:02:11 crc kubenswrapper[4870]: I0216 17:02:11.222321 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:02:11 crc kubenswrapper[4870]: I0216 17:02:11.222492 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:02:11 crc kubenswrapper[4870]: I0216 17:02:11.223062 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:02:11 crc kubenswrapper[4870]: E0216 17:02:11.223313 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 17:02:11 crc kubenswrapper[4870]: E0216 17:02:11.223483 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 17:02:11 crc kubenswrapper[4870]: E0216 17:02:11.224093 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zsfxc" podUID="d13b0b83-258a-4545-b358-e08252dbbe87" Feb 16 17:02:12 crc kubenswrapper[4870]: I0216 17:02:12.222393 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:02:12 crc kubenswrapper[4870]: I0216 17:02:12.226211 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 17:02:12 crc kubenswrapper[4870]: I0216 17:02:12.226442 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 17:02:13 crc kubenswrapper[4870]: I0216 17:02:13.222134 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:02:13 crc kubenswrapper[4870]: I0216 17:02:13.222298 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:02:13 crc kubenswrapper[4870]: I0216 17:02:13.222460 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:02:13 crc kubenswrapper[4870]: I0216 17:02:13.224864 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 17:02:13 crc kubenswrapper[4870]: I0216 17:02:13.225605 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 17:02:13 crc kubenswrapper[4870]: I0216 17:02:13.225846 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 17:02:13 crc kubenswrapper[4870]: I0216 17:02:13.226227 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.885311 4870 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 16 17:02:15 crc 
kubenswrapper[4870]: I0216 17:02:15.939192 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4gt6"] Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.940003 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4gt6" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.945610 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c5snp"] Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.945844 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.946295 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.946399 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8"] Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.946483 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.946692 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.946858 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.947142 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cnjq9"] Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.947357 4870 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.947376 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.948319 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jz2fc"] Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.948869 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw"] Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.949122 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.949197 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.950052 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs"] Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.950198 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.950873 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.952421 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.954402 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.954680 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.955450 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.955748 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.955938 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.956066 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.956118 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.956191 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.957170 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 
17:02:15.957194 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.959655 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.960212 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-br5s9"] Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.961016 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.962140 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.962318 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.962337 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.962437 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.962160 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.963051 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.963400 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 17:02:15 crc 
kubenswrapper[4870]: I0216 17:02:15.963608 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.963930 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.965006 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5ll8r"] Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.965115 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.965435 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.965520 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.965809 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.966082 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.978356 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.980138 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/775a2c57-6603-456c-90f8-13116521d18d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-k4gt6\" (UID: \"775a2c57-6603-456c-90f8-13116521d18d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4gt6" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.980213 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/775a2c57-6603-456c-90f8-13116521d18d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-k4gt6\" (UID: \"775a2c57-6603-456c-90f8-13116521d18d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4gt6" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.980262 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdkj2\" (UniqueName: \"kubernetes.io/projected/775a2c57-6603-456c-90f8-13116521d18d-kube-api-access-sdkj2\") pod \"openshift-apiserver-operator-796bbdcf4f-k4gt6\" (UID: \"775a2c57-6603-456c-90f8-13116521d18d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4gt6" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.981688 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.982094 4870 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.982452 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.983051 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.983415 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.985414 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.987766 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 17:02:15 crc kubenswrapper[4870]: I0216 17:02:15.988697 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.006603 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.007529 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.007721 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.007797 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 17:02:16 crc kubenswrapper[4870]: 
I0216 17:02:16.010361 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jz2fc"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.010630 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.010824 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.010863 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.010925 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.011074 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.011271 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4gt6"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.011452 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.011608 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.011786 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.011876 4870 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.012208 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.012263 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.013137 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.021536 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c5snp"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.012217 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.012319 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.012170 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.012355 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.012366 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.012411 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 
17:02:16.012535 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.011977 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.014455 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.019759 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.019861 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.019984 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.025346 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-fplj9"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.026130 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-fplj9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.026127 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-n96b6"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.033396 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.037005 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.039309 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.039755 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.041568 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.041962 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.042229 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j4dv8"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.042248 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.042333 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.043962 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-b66lf"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.044412 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.044803 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cf6bm"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.044942 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.045536 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.053062 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.054020 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.056252 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.061515 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.066769 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.067397 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.094636 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/775a2c57-6603-456c-90f8-13116521d18d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-k4gt6\" (UID: \"775a2c57-6603-456c-90f8-13116521d18d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4gt6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.094705 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c1cae5d-9592-47f5-9c64-301163ac7b1a-config\") pod \"machine-approver-56656f9798-tlsvw\" (UID: \"9c1cae5d-9592-47f5-9c64-301163ac7b1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.095251 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbc88\" (UniqueName: \"kubernetes.io/projected/9c1cae5d-9592-47f5-9c64-301163ac7b1a-kube-api-access-sbc88\") pod \"machine-approver-56656f9798-tlsvw\" (UID: \"9c1cae5d-9592-47f5-9c64-301163ac7b1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 
17:02:16.095292 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d2s9\" (UniqueName: \"kubernetes.io/projected/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-kube-api-access-2d2s9\") pod \"controller-manager-879f6c89f-c5snp\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.095333 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/775a2c57-6603-456c-90f8-13116521d18d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-k4gt6\" (UID: \"775a2c57-6603-456c-90f8-13116521d18d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4gt6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.095350 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-serving-cert\") pod \"controller-manager-879f6c89f-c5snp\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.096052 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/775a2c57-6603-456c-90f8-13116521d18d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-k4gt6\" (UID: \"775a2c57-6603-456c-90f8-13116521d18d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4gt6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.098353 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdkj2\" (UniqueName: \"kubernetes.io/projected/775a2c57-6603-456c-90f8-13116521d18d-kube-api-access-sdkj2\") pod 
\"openshift-apiserver-operator-796bbdcf4f-k4gt6\" (UID: \"775a2c57-6603-456c-90f8-13116521d18d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4gt6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.119285 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-client-ca\") pod \"controller-manager-879f6c89f-c5snp\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.119584 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-config\") pod \"controller-manager-879f6c89f-c5snp\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.119692 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9c1cae5d-9592-47f5-9c64-301163ac7b1a-machine-approver-tls\") pod \"machine-approver-56656f9798-tlsvw\" (UID: \"9c1cae5d-9592-47f5-9c64-301163ac7b1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.119771 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9c1cae5d-9592-47f5-9c64-301163ac7b1a-auth-proxy-config\") pod \"machine-approver-56656f9798-tlsvw\" (UID: \"9c1cae5d-9592-47f5-9c64-301163ac7b1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.119855 
4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c5snp\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.099011 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.101101 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8q7hf"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.120873 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.101246 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.124665 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8df2c"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.124783 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8q7hf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.125126 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.099827 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.100715 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.101564 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.101616 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.101694 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.101698 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.101759 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.101770 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.101814 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.101830 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.106073 4870 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.107346 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.109196 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.109257 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.115480 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.115849 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.115889 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.115967 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.117600 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.126532 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.126719 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.126859 4870 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.127059 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.127151 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zgkhs"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.127722 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zgkhs" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.128153 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8df2c" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.128805 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.127160 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.129882 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.130497 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.133516 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.133628 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4nxwd"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.133872 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/775a2c57-6603-456c-90f8-13116521d18d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-k4gt6\" (UID: \"775a2c57-6603-456c-90f8-13116521d18d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4gt6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.134381 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4nxwd" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.140063 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.143137 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.144036 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.145909 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bn28c"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.146574 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bn28c" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.146774 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tcchp"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.149461 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.149578 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tcchp" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.150347 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jxk8b"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.150876 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.151408 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.151573 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jxk8b" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.153138 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.153327 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.155606 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5ll8r"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.155630 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f4pwk"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.155756 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.155939 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.156262 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f4pwk" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.156771 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.158051 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v25cw"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.160542 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v25cw" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.161537 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9d2ls"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.163441 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9d2ls" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.164885 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.174688 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.184542 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4jpbt"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.188720 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.190616 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8rbb8"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.194923 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8rbb8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.194619 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cnjq9"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.197198 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-br5s9"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.197853 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdkj2\" (UniqueName: \"kubernetes.io/projected/775a2c57-6603-456c-90f8-13116521d18d-kube-api-access-sdkj2\") pod \"openshift-apiserver-operator-796bbdcf4f-k4gt6\" (UID: \"775a2c57-6603-456c-90f8-13116521d18d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4gt6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.197959 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rpzss"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.199101 4870 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m4tjr"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.199576 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rpzss" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.199617 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m4tjr" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.201343 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rqjhl"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.202095 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rqjhl" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.203421 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.204140 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.205742 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8sggp"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.207517 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.208417 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.209700 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fplj9"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.210974 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-n96b6"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.212807 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j4dv8"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.213863 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zgkhs"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.215541 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8q7hf"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.216683 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f4pwk"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.218510 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.220149 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.220723 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d988003-748a-4bb4-ac42-c38d41a5295b-serving-cert\") 
pod \"authentication-operator-69f744f599-jz2fc\" (UID: \"3d988003-748a-4bb4-ac42-c38d41a5295b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.220788 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr8gp\" (UniqueName: \"kubernetes.io/projected/3d988003-748a-4bb4-ac42-c38d41a5295b-kube-api-access-fr8gp\") pod \"authentication-operator-69f744f599-jz2fc\" (UID: \"3d988003-748a-4bb4-ac42-c38d41a5295b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.220840 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d2s9\" (UniqueName: \"kubernetes.io/projected/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-kube-api-access-2d2s9\") pod \"controller-manager-879f6c89f-c5snp\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.220908 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.220976 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221021 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/559fe54f-c6f9-4466-b9c8-da6318fc8f59-node-pullsecrets\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221045 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/db804a3b-9f2e-4638-ae79-7ef21a87104d-audit-dir\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221065 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4524185d-72ed-4ff4-be99-2a01cf133dbc-service-ca-bundle\") pod \"router-default-5444994796-b66lf\" (UID: \"4524185d-72ed-4ff4-be99-2a01cf133dbc\") " pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221087 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4524185d-72ed-4ff4-be99-2a01cf133dbc-stats-auth\") pod \"router-default-5444994796-b66lf\" (UID: \"4524185d-72ed-4ff4-be99-2a01cf133dbc\") " pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221141 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/559fe54f-c6f9-4466-b9c8-da6318fc8f59-image-import-ca\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221177 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221216 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b31442-b3cc-486b-8fd1-e968978c9f1c-config\") pod \"machine-api-operator-5694c8668f-br5s9\" (UID: \"68b31442-b3cc-486b-8fd1-e968978c9f1c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221281 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d988003-748a-4bb4-ac42-c38d41a5295b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jz2fc\" (UID: \"3d988003-748a-4bb4-ac42-c38d41a5295b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221316 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d988003-748a-4bb4-ac42-c38d41a5295b-config\") pod \"authentication-operator-69f744f599-jz2fc\" (UID: \"3d988003-748a-4bb4-ac42-c38d41a5295b\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221362 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221386 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221426 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-audit-policies\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221455 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-serving-cert\") pod \"controller-manager-879f6c89f-c5snp\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221482 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a30d36bf-bb06-4237-9273-eeee9188e931-audit-policies\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221502 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221521 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e138835e-4175-41cf-983c-6940600a8d32-etcd-ca\") pod \"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221541 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-oauth-serving-cert\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221558 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8svf4\" (UniqueName: \"kubernetes.io/projected/db804a3b-9f2e-4638-ae79-7ef21a87104d-kube-api-access-8svf4\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221578 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qghtr\" (UniqueName: \"kubernetes.io/projected/e138835e-4175-41cf-983c-6940600a8d32-kube-api-access-qghtr\") pod \"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221613 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/559fe54f-c6f9-4466-b9c8-da6318fc8f59-etcd-client\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221634 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/559fe54f-c6f9-4466-b9c8-da6318fc8f59-serving-cert\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221650 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221703 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdlhm\" (UniqueName: 
\"kubernetes.io/projected/b0e0ea5e-92af-42e9-9f96-809c376bcc69-kube-api-access-mdlhm\") pod \"route-controller-manager-6576b87f9c-g6xvs\" (UID: \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221720 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/559fe54f-c6f9-4466-b9c8-da6318fc8f59-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221738 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a30d36bf-bb06-4237-9273-eeee9188e931-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221763 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-config\") pod \"controller-manager-879f6c89f-c5snp\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221784 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-client-ca\") pod \"controller-manager-879f6c89f-c5snp\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221802 4870 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtkck\" (UniqueName: \"kubernetes.io/projected/a30d36bf-bb06-4237-9273-eeee9188e931-kube-api-access-qtkck\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221819 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e138835e-4175-41cf-983c-6940600a8d32-etcd-service-ca\") pod \"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221839 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a30d36bf-bb06-4237-9273-eeee9188e931-audit-dir\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221857 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221878 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221894 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e138835e-4175-41cf-983c-6940600a8d32-config\") pod \"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221916 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/559fe54f-c6f9-4466-b9c8-da6318fc8f59-audit\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221931 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d988003-748a-4bb4-ac42-c38d41a5295b-service-ca-bundle\") pod \"authentication-operator-69f744f599-jz2fc\" (UID: \"3d988003-748a-4bb4-ac42-c38d41a5295b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221965 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e0ea5e-92af-42e9-9f96-809c376bcc69-client-ca\") pod \"route-controller-manager-6576b87f9c-g6xvs\" (UID: \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.221995 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-config\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222012 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-service-ca\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222030 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/559fe54f-c6f9-4466-b9c8-da6318fc8f59-audit-dir\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222046 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e138835e-4175-41cf-983c-6940600a8d32-etcd-client\") pod \"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222061 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-trusted-ca-bundle\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 
17:02:16.222082 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9c1cae5d-9592-47f5-9c64-301163ac7b1a-machine-approver-tls\") pod \"machine-approver-56656f9798-tlsvw\" (UID: \"9c1cae5d-9592-47f5-9c64-301163ac7b1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222102 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9c1cae5d-9592-47f5-9c64-301163ac7b1a-auth-proxy-config\") pod \"machine-approver-56656f9798-tlsvw\" (UID: \"9c1cae5d-9592-47f5-9c64-301163ac7b1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222118 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/559fe54f-c6f9-4466-b9c8-da6318fc8f59-config\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222135 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a30d36bf-bb06-4237-9273-eeee9188e931-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222165 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222190 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c5snp\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222212 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht6kl\" (UniqueName: \"kubernetes.io/projected/4524185d-72ed-4ff4-be99-2a01cf133dbc-kube-api-access-ht6kl\") pod \"router-default-5444994796-b66lf\" (UID: \"4524185d-72ed-4ff4-be99-2a01cf133dbc\") " pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222242 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/559fe54f-c6f9-4466-b9c8-da6318fc8f59-etcd-serving-ca\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222268 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccx9v\" (UniqueName: \"kubernetes.io/projected/cc3d5e28-d52f-41d9-8360-faa12e014349-kube-api-access-ccx9v\") pod \"olm-operator-6b444d44fb-b5nnl\" (UID: \"cc3d5e28-d52f-41d9-8360-faa12e014349\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222298 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e138835e-4175-41cf-983c-6940600a8d32-serving-cert\") pod \"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222316 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4524185d-72ed-4ff4-be99-2a01cf133dbc-default-certificate\") pod \"router-default-5444994796-b66lf\" (UID: \"4524185d-72ed-4ff4-be99-2a01cf133dbc\") " pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222334 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc3d5e28-d52f-41d9-8360-faa12e014349-profile-collector-cert\") pod \"olm-operator-6b444d44fb-b5nnl\" (UID: \"cc3d5e28-d52f-41d9-8360-faa12e014349\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222360 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-serving-cert\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222380 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/559fe54f-c6f9-4466-b9c8-da6318fc8f59-encryption-config\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " 
pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222396 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78w85\" (UniqueName: \"kubernetes.io/projected/559fe54f-c6f9-4466-b9c8-da6318fc8f59-kube-api-access-78w85\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222411 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68b31442-b3cc-486b-8fd1-e968978c9f1c-images\") pod \"machine-api-operator-5694c8668f-br5s9\" (UID: \"68b31442-b3cc-486b-8fd1-e968978c9f1c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222428 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d84ht\" (UniqueName: \"kubernetes.io/projected/68b31442-b3cc-486b-8fd1-e968978c9f1c-kube-api-access-d84ht\") pod \"machine-api-operator-5694c8668f-br5s9\" (UID: \"68b31442-b3cc-486b-8fd1-e968978c9f1c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222450 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a30d36bf-bb06-4237-9273-eeee9188e931-serving-cert\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222466 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222483 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw64s\" (UniqueName: \"kubernetes.io/projected/6cb09d74-7044-4c8c-a89b-6bf4593ffb9d-kube-api-access-hw64s\") pod \"downloads-7954f5f757-fplj9\" (UID: \"6cb09d74-7044-4c8c-a89b-6bf4593ffb9d\") " pod="openshift-console/downloads-7954f5f757-fplj9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222504 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e0ea5e-92af-42e9-9f96-809c376bcc69-config\") pod \"route-controller-manager-6576b87f9c-g6xvs\" (UID: \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222531 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a30d36bf-bb06-4237-9273-eeee9188e931-etcd-client\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222548 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-oauth-config\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 
17:02:16.222564 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hmnw\" (UniqueName: \"kubernetes.io/projected/ed053e72-4999-4b5d-a9f3-c58b92280c8c-kube-api-access-8hmnw\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222582 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/68b31442-b3cc-486b-8fd1-e968978c9f1c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-br5s9\" (UID: \"68b31442-b3cc-486b-8fd1-e968978c9f1c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222599 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc3d5e28-d52f-41d9-8360-faa12e014349-srv-cert\") pod \"olm-operator-6b444d44fb-b5nnl\" (UID: \"cc3d5e28-d52f-41d9-8360-faa12e014349\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222615 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e0ea5e-92af-42e9-9f96-809c376bcc69-serving-cert\") pod \"route-controller-manager-6576b87f9c-g6xvs\" (UID: \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222640 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c1cae5d-9592-47f5-9c64-301163ac7b1a-config\") pod \"machine-approver-56656f9798-tlsvw\" (UID: 
\"9c1cae5d-9592-47f5-9c64-301163ac7b1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222656 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a30d36bf-bb06-4237-9273-eeee9188e931-encryption-config\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222673 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4524185d-72ed-4ff4-be99-2a01cf133dbc-metrics-certs\") pod \"router-default-5444994796-b66lf\" (UID: \"4524185d-72ed-4ff4-be99-2a01cf133dbc\") " pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.222706 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbc88\" (UniqueName: \"kubernetes.io/projected/9c1cae5d-9592-47f5-9c64-301163ac7b1a-kube-api-access-sbc88\") pod \"machine-approver-56656f9798-tlsvw\" (UID: \"9c1cae5d-9592-47f5-9c64-301163ac7b1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.224097 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.224871 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-client-ca\") pod \"controller-manager-879f6c89f-c5snp\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.225128 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c1cae5d-9592-47f5-9c64-301163ac7b1a-config\") pod \"machine-approver-56656f9798-tlsvw\" (UID: \"9c1cae5d-9592-47f5-9c64-301163ac7b1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.225369 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9c1cae5d-9592-47f5-9c64-301163ac7b1a-auth-proxy-config\") pod \"machine-approver-56656f9798-tlsvw\" (UID: \"9c1cae5d-9592-47f5-9c64-301163ac7b1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.226282 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c5snp\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.227111 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-serving-cert\") pod \"controller-manager-879f6c89f-c5snp\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.229114 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9c1cae5d-9592-47f5-9c64-301163ac7b1a-machine-approver-tls\") pod 
\"machine-approver-56656f9798-tlsvw\" (UID: \"9c1cae5d-9592-47f5-9c64-301163ac7b1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.229134 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-config\") pod \"controller-manager-879f6c89f-c5snp\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.230172 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bn28c"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.230205 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jxk8b"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.230463 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tcchp"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.231636 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.233198 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.234454 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8df2c"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.235763 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-g78th"] Feb 16 17:02:16 crc 
kubenswrapper[4870]: I0216 17:02:16.236919 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-g78th" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.236991 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-k28wm"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.237978 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-k28wm" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.238258 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4nxwd"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.240969 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.241843 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4jpbt"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.243284 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.244555 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.246046 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m4tjr"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.247440 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cf6bm"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.248831 4870 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8rbb8"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.249635 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.250484 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9d2ls"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.251853 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v25cw"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.253161 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.260113 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-k28wm"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.260204 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rpzss"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.263420 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-g78th"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.263980 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.267106 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4gt6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.267598 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rqjhl"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.273502 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8sggp"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.275058 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fthnr"] Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.277337 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fthnr" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.283270 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.303572 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.323177 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a30d36bf-bb06-4237-9273-eeee9188e931-serving-cert\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.323310 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d84ht\" (UniqueName: \"kubernetes.io/projected/68b31442-b3cc-486b-8fd1-e968978c9f1c-kube-api-access-d84ht\") pod \"machine-api-operator-5694c8668f-br5s9\" (UID: \"68b31442-b3cc-486b-8fd1-e968978c9f1c\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.323399 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw64s\" (UniqueName: \"kubernetes.io/projected/6cb09d74-7044-4c8c-a89b-6bf4593ffb9d-kube-api-access-hw64s\") pod \"downloads-7954f5f757-fplj9\" (UID: \"6cb09d74-7044-4c8c-a89b-6bf4593ffb9d\") " pod="openshift-console/downloads-7954f5f757-fplj9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.323471 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e0ea5e-92af-42e9-9f96-809c376bcc69-config\") pod \"route-controller-manager-6576b87f9c-g6xvs\" (UID: \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.323546 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.323619 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-oauth-config\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.323699 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hmnw\" (UniqueName: 
\"kubernetes.io/projected/ed053e72-4999-4b5d-a9f3-c58b92280c8c-kube-api-access-8hmnw\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.323781 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a30d36bf-bb06-4237-9273-eeee9188e931-etcd-client\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.324091 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc3d5e28-d52f-41d9-8360-faa12e014349-srv-cert\") pod \"olm-operator-6b444d44fb-b5nnl\" (UID: \"cc3d5e28-d52f-41d9-8360-faa12e014349\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.324214 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e0ea5e-92af-42e9-9f96-809c376bcc69-serving-cert\") pod \"route-controller-manager-6576b87f9c-g6xvs\" (UID: \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.324302 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/68b31442-b3cc-486b-8fd1-e968978c9f1c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-br5s9\" (UID: \"68b31442-b3cc-486b-8fd1-e968978c9f1c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.324850 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a30d36bf-bb06-4237-9273-eeee9188e931-encryption-config\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.324940 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4524185d-72ed-4ff4-be99-2a01cf133dbc-metrics-certs\") pod \"router-default-5444994796-b66lf\" (UID: \"4524185d-72ed-4ff4-be99-2a01cf133dbc\") " pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.325067 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d988003-748a-4bb4-ac42-c38d41a5295b-serving-cert\") pod \"authentication-operator-69f744f599-jz2fc\" (UID: \"3d988003-748a-4bb4-ac42-c38d41a5295b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.325144 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr8gp\" (UniqueName: \"kubernetes.io/projected/3d988003-748a-4bb4-ac42-c38d41a5295b-kube-api-access-fr8gp\") pod \"authentication-operator-69f744f599-jz2fc\" (UID: \"3d988003-748a-4bb4-ac42-c38d41a5295b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.325233 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/559fe54f-c6f9-4466-b9c8-da6318fc8f59-node-pullsecrets\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 
17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.325312 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/db804a3b-9f2e-4638-ae79-7ef21a87104d-audit-dir\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.325385 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.325460 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.325489 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.325539 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4524185d-72ed-4ff4-be99-2a01cf133dbc-service-ca-bundle\") pod \"router-default-5444994796-b66lf\" (UID: \"4524185d-72ed-4ff4-be99-2a01cf133dbc\") " pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.325748 4870 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4524185d-72ed-4ff4-be99-2a01cf133dbc-stats-auth\") pod \"router-default-5444994796-b66lf\" (UID: \"4524185d-72ed-4ff4-be99-2a01cf133dbc\") " pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.325792 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.325827 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/559fe54f-c6f9-4466-b9c8-da6318fc8f59-image-import-ca\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.325857 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d988003-748a-4bb4-ac42-c38d41a5295b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jz2fc\" (UID: \"3d988003-748a-4bb4-ac42-c38d41a5295b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.325882 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b31442-b3cc-486b-8fd1-e968978c9f1c-config\") pod \"machine-api-operator-5694c8668f-br5s9\" (UID: \"68b31442-b3cc-486b-8fd1-e968978c9f1c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" Feb 16 17:02:16 crc 
kubenswrapper[4870]: I0216 17:02:16.325910 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.325933 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.325968 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d988003-748a-4bb4-ac42-c38d41a5295b-config\") pod \"authentication-operator-69f744f599-jz2fc\" (UID: \"3d988003-748a-4bb4-ac42-c38d41a5295b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326017 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-audit-policies\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326042 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a30d36bf-bb06-4237-9273-eeee9188e931-audit-policies\") pod 
\"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326064 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326085 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e138835e-4175-41cf-983c-6940600a8d32-etcd-ca\") pod \"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326109 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-oauth-serving-cert\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326137 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/559fe54f-c6f9-4466-b9c8-da6318fc8f59-etcd-client\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326158 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/559fe54f-c6f9-4466-b9c8-da6318fc8f59-serving-cert\") pod 
\"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326181 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326203 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8svf4\" (UniqueName: \"kubernetes.io/projected/db804a3b-9f2e-4638-ae79-7ef21a87104d-kube-api-access-8svf4\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326223 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qghtr\" (UniqueName: \"kubernetes.io/projected/e138835e-4175-41cf-983c-6940600a8d32-kube-api-access-qghtr\") pod \"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326259 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdlhm\" (UniqueName: \"kubernetes.io/projected/b0e0ea5e-92af-42e9-9f96-809c376bcc69-kube-api-access-mdlhm\") pod \"route-controller-manager-6576b87f9c-g6xvs\" (UID: \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326288 4870 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/559fe54f-c6f9-4466-b9c8-da6318fc8f59-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326305 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a30d36bf-bb06-4237-9273-eeee9188e931-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326327 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtkck\" (UniqueName: \"kubernetes.io/projected/a30d36bf-bb06-4237-9273-eeee9188e931-kube-api-access-qtkck\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326352 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a30d36bf-bb06-4237-9273-eeee9188e931-audit-dir\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326358 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/db804a3b-9f2e-4638-ae79-7ef21a87104d-audit-dir\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326372 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326391 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326412 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e138835e-4175-41cf-983c-6940600a8d32-config\") pod \"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326432 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e138835e-4175-41cf-983c-6940600a8d32-etcd-service-ca\") pod \"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326456 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/559fe54f-c6f9-4466-b9c8-da6318fc8f59-audit\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc 
kubenswrapper[4870]: I0216 17:02:16.326481 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d988003-748a-4bb4-ac42-c38d41a5295b-service-ca-bundle\") pod \"authentication-operator-69f744f599-jz2fc\" (UID: \"3d988003-748a-4bb4-ac42-c38d41a5295b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326499 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e0ea5e-92af-42e9-9f96-809c376bcc69-client-ca\") pod \"route-controller-manager-6576b87f9c-g6xvs\" (UID: \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326526 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-config\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326560 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-service-ca\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326579 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/559fe54f-c6f9-4466-b9c8-da6318fc8f59-audit-dir\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " 
pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326598 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e138835e-4175-41cf-983c-6940600a8d32-etcd-client\") pod \"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326618 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-trusted-ca-bundle\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326647 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/559fe54f-c6f9-4466-b9c8-da6318fc8f59-config\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326665 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a30d36bf-bb06-4237-9273-eeee9188e931-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326688 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: 
\"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326688 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e0ea5e-92af-42e9-9f96-809c376bcc69-config\") pod \"route-controller-manager-6576b87f9c-g6xvs\" (UID: \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326715 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht6kl\" (UniqueName: \"kubernetes.io/projected/4524185d-72ed-4ff4-be99-2a01cf133dbc-kube-api-access-ht6kl\") pod \"router-default-5444994796-b66lf\" (UID: \"4524185d-72ed-4ff4-be99-2a01cf133dbc\") " pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326688 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/559fe54f-c6f9-4466-b9c8-da6318fc8f59-node-pullsecrets\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326746 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/559fe54f-c6f9-4466-b9c8-da6318fc8f59-etcd-serving-ca\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326771 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e138835e-4175-41cf-983c-6940600a8d32-serving-cert\") pod 
\"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326791 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4524185d-72ed-4ff4-be99-2a01cf133dbc-default-certificate\") pod \"router-default-5444994796-b66lf\" (UID: \"4524185d-72ed-4ff4-be99-2a01cf133dbc\") " pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326812 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc3d5e28-d52f-41d9-8360-faa12e014349-profile-collector-cert\") pod \"olm-operator-6b444d44fb-b5nnl\" (UID: \"cc3d5e28-d52f-41d9-8360-faa12e014349\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326835 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccx9v\" (UniqueName: \"kubernetes.io/projected/cc3d5e28-d52f-41d9-8360-faa12e014349-kube-api-access-ccx9v\") pod \"olm-operator-6b444d44fb-b5nnl\" (UID: \"cc3d5e28-d52f-41d9-8360-faa12e014349\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326889 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/559fe54f-c6f9-4466-b9c8-da6318fc8f59-encryption-config\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326910 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-serving-cert\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326972 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68b31442-b3cc-486b-8fd1-e968978c9f1c-images\") pod \"machine-api-operator-5694c8668f-br5s9\" (UID: \"68b31442-b3cc-486b-8fd1-e968978c9f1c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326994 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78w85\" (UniqueName: \"kubernetes.io/projected/559fe54f-c6f9-4466-b9c8-da6318fc8f59-kube-api-access-78w85\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.327355 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.328148 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a30d36bf-bb06-4237-9273-eeee9188e931-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.328310 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a30d36bf-bb06-4237-9273-eeee9188e931-audit-dir\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.328710 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4524185d-72ed-4ff4-be99-2a01cf133dbc-service-ca-bundle\") pod \"router-default-5444994796-b66lf\" (UID: \"4524185d-72ed-4ff4-be99-2a01cf133dbc\") " pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.328908 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a30d36bf-bb06-4237-9273-eeee9188e931-audit-policies\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.329131 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/559fe54f-c6f9-4466-b9c8-da6318fc8f59-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.329757 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-oauth-serving-cert\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.330376 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-audit-policies\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.330397 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e138835e-4175-41cf-983c-6940600a8d32-etcd-ca\") pod \"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.330440 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b31442-b3cc-486b-8fd1-e968978c9f1c-config\") pod \"machine-api-operator-5694c8668f-br5s9\" (UID: \"68b31442-b3cc-486b-8fd1-e968978c9f1c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.330997 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/559fe54f-c6f9-4466-b9c8-da6318fc8f59-image-import-ca\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.332030 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d988003-748a-4bb4-ac42-c38d41a5295b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jz2fc\" (UID: \"3d988003-748a-4bb4-ac42-c38d41a5295b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.332086 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.332124 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/559fe54f-c6f9-4466-b9c8-da6318fc8f59-config\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.332495 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.332676 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a30d36bf-bb06-4237-9273-eeee9188e931-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.332899 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a30d36bf-bb06-4237-9273-eeee9188e931-encryption-config\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.333245 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e0ea5e-92af-42e9-9f96-809c376bcc69-client-ca\") pod \"route-controller-manager-6576b87f9c-g6xvs\" (UID: \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.333375 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/559fe54f-c6f9-4466-b9c8-da6318fc8f59-audit-dir\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.333794 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/559fe54f-c6f9-4466-b9c8-da6318fc8f59-etcd-serving-ca\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.334260 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-service-ca\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.335034 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e138835e-4175-41cf-983c-6940600a8d32-etcd-service-ca\") pod \"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.335213 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d988003-748a-4bb4-ac42-c38d41a5295b-config\") pod \"authentication-operator-69f744f599-jz2fc\" (UID: \"3d988003-748a-4bb4-ac42-c38d41a5295b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.335292 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d988003-748a-4bb4-ac42-c38d41a5295b-service-ca-bundle\") pod \"authentication-operator-69f744f599-jz2fc\" (UID: \"3d988003-748a-4bb4-ac42-c38d41a5295b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.335486 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-config\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.335576 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/559fe54f-c6f9-4466-b9c8-da6318fc8f59-audit\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.326880 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.336712 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc3d5e28-d52f-41d9-8360-faa12e014349-profile-collector-cert\") pod \"olm-operator-6b444d44fb-b5nnl\" (UID: \"cc3d5e28-d52f-41d9-8360-faa12e014349\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.337568 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-trusted-ca-bundle\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.337629 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68b31442-b3cc-486b-8fd1-e968978c9f1c-images\") pod \"machine-api-operator-5694c8668f-br5s9\" (UID: \"68b31442-b3cc-486b-8fd1-e968978c9f1c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.338300 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/559fe54f-c6f9-4466-b9c8-da6318fc8f59-etcd-client\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.338816 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e138835e-4175-41cf-983c-6940600a8d32-etcd-client\") pod \"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.339000 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.339167 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e138835e-4175-41cf-983c-6940600a8d32-serving-cert\") pod \"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.339233 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-serving-cert\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.339611 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e138835e-4175-41cf-983c-6940600a8d32-config\") pod \"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.340523 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 
17:02:16.340599 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.340650 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-oauth-config\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.341139 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.341315 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/559fe54f-c6f9-4466-b9c8-da6318fc8f59-serving-cert\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.341325 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4524185d-72ed-4ff4-be99-2a01cf133dbc-stats-auth\") pod \"router-default-5444994796-b66lf\" (UID: \"4524185d-72ed-4ff4-be99-2a01cf133dbc\") " pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:16 crc 
kubenswrapper[4870]: I0216 17:02:16.342646 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc3d5e28-d52f-41d9-8360-faa12e014349-srv-cert\") pod \"olm-operator-6b444d44fb-b5nnl\" (UID: \"cc3d5e28-d52f-41d9-8360-faa12e014349\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.342739 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a30d36bf-bb06-4237-9273-eeee9188e931-serving-cert\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.342757 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d988003-748a-4bb4-ac42-c38d41a5295b-serving-cert\") pod \"authentication-operator-69f744f599-jz2fc\" (UID: \"3d988003-748a-4bb4-ac42-c38d41a5295b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.342857 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/68b31442-b3cc-486b-8fd1-e968978c9f1c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-br5s9\" (UID: \"68b31442-b3cc-486b-8fd1-e968978c9f1c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.343022 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.343036 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.343035 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a30d36bf-bb06-4237-9273-eeee9188e931-etcd-client\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.343326 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e0ea5e-92af-42e9-9f96-809c376bcc69-serving-cert\") pod \"route-controller-manager-6576b87f9c-g6xvs\" (UID: \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.343350 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4524185d-72ed-4ff4-be99-2a01cf133dbc-default-certificate\") pod \"router-default-5444994796-b66lf\" (UID: \"4524185d-72ed-4ff4-be99-2a01cf133dbc\") " pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.343872 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.344498 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/559fe54f-c6f9-4466-b9c8-da6318fc8f59-encryption-config\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.344836 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4524185d-72ed-4ff4-be99-2a01cf133dbc-metrics-certs\") pod \"router-default-5444994796-b66lf\" (UID: \"4524185d-72ed-4ff4-be99-2a01cf133dbc\") " pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.345563 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.363278 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.384148 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.403404 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.423666 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.442997 4870 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.451853 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4gt6"] Feb 16 17:02:16 crc kubenswrapper[4870]: W0216 17:02:16.458290 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod775a2c57_6603_456c_90f8_13116521d18d.slice/crio-948f7c911f36f1fda1aa015828251d5a8c46f858ee50a7475ed098ec59d0bd41 WatchSource:0}: Error finding container 948f7c911f36f1fda1aa015828251d5a8c46f858ee50a7475ed098ec59d0bd41: Status 404 returned error can't find the container with id 948f7c911f36f1fda1aa015828251d5a8c46f858ee50a7475ed098ec59d0bd41 Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.464543 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.484004 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.503291 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.543498 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.562720 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.582287 4870 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.603645 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.623435 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.643729 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.663867 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.683825 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.704716 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.724008 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.744826 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.762721 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 
16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.784209 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.803491 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.823481 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.843450 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.871014 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.883596 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.903549 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.924008 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.943472 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 17:02:16.964436 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 17:02:16 crc kubenswrapper[4870]: I0216 
17:02:16.982939 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.008883 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.011699 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4gt6" event={"ID":"775a2c57-6603-456c-90f8-13116521d18d","Type":"ContainerStarted","Data":"d9d7ea15b83441403450a3a6d2c8c38ef87cf408409e7f6c211105a5f4347880"} Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.011760 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4gt6" event={"ID":"775a2c57-6603-456c-90f8-13116521d18d","Type":"ContainerStarted","Data":"948f7c911f36f1fda1aa015828251d5a8c46f858ee50a7475ed098ec59d0bd41"} Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.023086 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.042974 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.063680 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.083902 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.104629 4870 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.133444 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.144566 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.160973 4870 request.go:700] Waited for 1.004328089s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&limit=500&resourceVersion=0 Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.163798 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.182725 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.203796 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.223226 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.243738 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.264650 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 
16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.303080 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.323471 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.341421 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-registry-tls\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.341652 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e6bf0f44-e205-4b3c-8360-a9578c67459f-registry-certificates\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.341727 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.341817 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e6bf0f44-e205-4b3c-8360-a9578c67459f-installation-pull-secrets\") pod 
\"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.341844 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-bound-sa-token\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.342009 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e6bf0f44-e205-4b3c-8360-a9578c67459f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.342094 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6bf0f44-e205-4b3c-8360-a9578c67459f-trusted-ca\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.342199 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdd2g\" (UniqueName: \"kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-kube-api-access-tdd2g\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: E0216 17:02:17.342486 4870 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:17.84245838 +0000 UTC m=+142.325922994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.343610 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.363927 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.383662 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.403416 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.423803 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.442928 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:17 crc kubenswrapper[4870]: E0216 17:02:17.443099 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:17.943066443 +0000 UTC m=+142.426530817 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443259 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv8gd\" (UniqueName: \"kubernetes.io/projected/7f83f0b7-935c-4972-83c7-bd9a2d95afc1-kube-api-access-sv8gd\") pod \"cluster-image-registry-operator-dc59b4c8b-4k6sf\" (UID: \"7f83f0b7-935c-4972-83c7-bd9a2d95afc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443292 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8779ed51-68c4-4fc0-8e83-994215a16ba0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4nxwd\" (UID: \"8779ed51-68c4-4fc0-8e83-994215a16ba0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4nxwd" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443313 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-q6btb\" (UniqueName: \"kubernetes.io/projected/05a3a12e-c6b1-4ff3-9926-c9bb85192b03-kube-api-access-q6btb\") pod \"multus-admission-controller-857f4d67dd-zgkhs\" (UID: \"05a3a12e-c6b1-4ff3-9926-c9bb85192b03\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zgkhs" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443332 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxkph\" (UniqueName: \"kubernetes.io/projected/986bb24c-5a0b-4fe9-bd99-e48c3477cc45-kube-api-access-wxkph\") pod \"packageserver-d55dfcdfc-6v8zp\" (UID: \"986bb24c-5a0b-4fe9-bd99-e48c3477cc45\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443415 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9-serving-cert\") pod \"console-operator-58897d9998-8q7hf\" (UID: \"6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9\") " pod="openshift-console-operator/console-operator-58897d9998-8q7hf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443433 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00f06e07-b382-4409-a7af-cd84abf48e99-proxy-tls\") pod \"machine-config-operator-74547568cd-pnznr\" (UID: \"00f06e07-b382-4409-a7af-cd84abf48e99\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443450 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnmjl\" (UniqueName: \"kubernetes.io/projected/db7abdfa-44a0-4c7b-b314-bec98e87552d-kube-api-access-tnmjl\") pod \"dns-default-k28wm\" (UID: \"db7abdfa-44a0-4c7b-b314-bec98e87552d\") " 
pod="openshift-dns/dns-default-k28wm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443511 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad671622-917b-4e62-a887-d2d6e0935f2e-secret-volume\") pod \"collect-profiles-29521020-9sztf\" (UID: \"ad671622-917b-4e62-a887-d2d6e0935f2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443538 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08d93167-bc2f-4032-9840-f5eda9916ddd-srv-cert\") pod \"catalog-operator-68c6474976-jxjrh\" (UID: \"08d93167-bc2f-4032-9840-f5eda9916ddd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443582 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca7e9d14-d778-46fa-bbd4-326a1cf28a38-cert\") pod \"ingress-canary-g78th\" (UID: \"ca7e9d14-d778-46fa-bbd4-326a1cf28a38\") " pod="openshift-ingress-canary/ingress-canary-g78th" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443613 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t7w5\" (UniqueName: \"kubernetes.io/projected/ad671622-917b-4e62-a887-d2d6e0935f2e-kube-api-access-8t7w5\") pod \"collect-profiles-29521020-9sztf\" (UID: \"ad671622-917b-4e62-a887-d2d6e0935f2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443673 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/1d353c23-d497-4fb3-8672-88f6cb2734d4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jfvff\" (UID: \"1d353c23-d497-4fb3-8672-88f6cb2734d4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443735 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsf24\" (UniqueName: \"kubernetes.io/projected/00f06e07-b382-4409-a7af-cd84abf48e99-kube-api-access-nsf24\") pod \"machine-config-operator-74547568cd-pnznr\" (UID: \"00f06e07-b382-4409-a7af-cd84abf48e99\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443780 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e6bf0f44-e205-4b3c-8360-a9578c67459f-registry-certificates\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443880 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d353c23-d497-4fb3-8672-88f6cb2734d4-trusted-ca\") pod \"ingress-operator-5b745b69d9-jfvff\" (UID: \"1d353c23-d497-4fb3-8672-88f6cb2734d4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443908 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d065b0a1-56ea-4bb1-aef7-9d9f46a46a42-node-bootstrap-token\") pod \"machine-config-server-fthnr\" (UID: \"d065b0a1-56ea-4bb1-aef7-9d9f46a46a42\") " pod="openshift-machine-config-operator/machine-config-server-fthnr" 
Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443931 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.443934 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/986bb24c-5a0b-4fe9-bd99-e48c3477cc45-apiservice-cert\") pod \"packageserver-d55dfcdfc-6v8zp\" (UID: \"986bb24c-5a0b-4fe9-bd99-e48c3477cc45\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444149 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-q2xvl\" (UID: \"d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444178 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn6dq\" (UniqueName: \"kubernetes.io/projected/e3852642-f948-4814-8fbd-04301eb7b9c1-kube-api-access-pn6dq\") pod \"kube-storage-version-migrator-operator-b67b599dd-jxk8b\" (UID: \"e3852642-f948-4814-8fbd-04301eb7b9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jxk8b" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444200 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb6qr\" (UniqueName: \"kubernetes.io/projected/ca7e9d14-d778-46fa-bbd4-326a1cf28a38-kube-api-access-kb6qr\") pod \"ingress-canary-g78th\" (UID: \"ca7e9d14-d778-46fa-bbd4-326a1cf28a38\") " 
pod="openshift-ingress-canary/ingress-canary-g78th" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444232 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/73e4dd57-ca25-46e2-9afb-a67a8d339e67-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-v25cw\" (UID: \"73e4dd57-ca25-46e2-9afb-a67a8d339e67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v25cw" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444263 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ae17602-ebe6-41e2-9241-c0552e6a4e7e-metrics-tls\") pod \"dns-operator-744455d44c-8rbb8\" (UID: \"0ae17602-ebe6-41e2-9241-c0552e6a4e7e\") " pod="openshift-dns-operator/dns-operator-744455d44c-8rbb8" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444286 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmxqn\" (UniqueName: \"kubernetes.io/projected/8779ed51-68c4-4fc0-8e83-994215a16ba0-kube-api-access-lmxqn\") pod \"openshift-controller-manager-operator-756b6f6bc6-4nxwd\" (UID: \"8779ed51-68c4-4fc0-8e83-994215a16ba0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4nxwd" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444317 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444387 4870 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/05a3a12e-c6b1-4ff3-9926-c9bb85192b03-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zgkhs\" (UID: \"05a3a12e-c6b1-4ff3-9926-c9bb85192b03\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zgkhs" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444412 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9jrr\" (UniqueName: \"kubernetes.io/projected/d065b0a1-56ea-4bb1-aef7-9d9f46a46a42-kube-api-access-r9jrr\") pod \"machine-config-server-fthnr\" (UID: \"d065b0a1-56ea-4bb1-aef7-9d9f46a46a42\") " pod="openshift-machine-config-operator/machine-config-server-fthnr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444431 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-plugins-dir\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444477 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8xf9\" (UniqueName: \"kubernetes.io/projected/fa1008c7-de78-4cc4-93d1-b6b22198a05a-kube-api-access-w8xf9\") pod \"control-plane-machine-set-operator-78cbb6b69f-bn28c\" (UID: \"fa1008c7-de78-4cc4-93d1-b6b22198a05a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bn28c" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444505 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00f06e07-b382-4409-a7af-cd84abf48e99-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-pnznr\" (UID: \"00f06e07-b382-4409-a7af-cd84abf48e99\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444522 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-csi-data-dir\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444561 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fj5k\" (UniqueName: \"kubernetes.io/projected/b0166ac5-5759-4298-a49b-6a67d179944e-kube-api-access-7fj5k\") pod \"service-ca-9c57cc56f-9d2ls\" (UID: \"b0166ac5-5759-4298-a49b-6a67d179944e\") " pod="openshift-service-ca/service-ca-9c57cc56f-9d2ls" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444586 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjdph\" (UniqueName: \"kubernetes.io/projected/4634141f-b890-48f3-b6c7-d8a730ff29b5-kube-api-access-xjdph\") pod \"service-ca-operator-777779d784-f4pwk\" (UID: \"4634141f-b890-48f3-b6c7-d8a730ff29b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f4pwk" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444607 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3852642-f948-4814-8fbd-04301eb7b9c1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jxk8b\" (UID: \"e3852642-f948-4814-8fbd-04301eb7b9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jxk8b" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 
17:02:17.444634 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e6bf0f44-e205-4b3c-8360-a9578c67459f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444654 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18408396-529f-4df8-8c25-4c483ea6d203-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8df2c\" (UID: \"18408396-529f-4df8-8c25-4c483ea6d203\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8df2c" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444684 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9kfp\" (UniqueName: \"kubernetes.io/projected/5dc49196-10d0-4e90-8523-8f0d055c5800-kube-api-access-p9kfp\") pod \"cluster-samples-operator-665b6dd947-rpzss\" (UID: \"5dc49196-10d0-4e90-8523-8f0d055c5800\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rpzss" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444705 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08d93167-bc2f-4032-9840-f5eda9916ddd-profile-collector-cert\") pod \"catalog-operator-68c6474976-jxjrh\" (UID: \"08d93167-bc2f-4032-9840-f5eda9916ddd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.444731 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7gg4\" (UniqueName: 
\"kubernetes.io/projected/1d353c23-d497-4fb3-8672-88f6cb2734d4-kube-api-access-m7gg4\") pod \"ingress-operator-5b745b69d9-jfvff\" (UID: \"1d353c23-d497-4fb3-8672-88f6cb2734d4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" Feb 16 17:02:17 crc kubenswrapper[4870]: E0216 17:02:17.445656 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:17.945635536 +0000 UTC m=+142.429099920 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.445757 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f83f0b7-935c-4972-83c7-bd9a2d95afc1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4k6sf\" (UID: \"7f83f0b7-935c-4972-83c7-bd9a2d95afc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.446005 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e6bf0f44-e205-4b3c-8360-a9578c67459f-registry-certificates\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc 
kubenswrapper[4870]: I0216 17:02:17.446282 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f83f0b7-935c-4972-83c7-bd9a2d95afc1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4k6sf\" (UID: \"7f83f0b7-935c-4972-83c7-bd9a2d95afc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.446424 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad671622-917b-4e62-a887-d2d6e0935f2e-config-volume\") pod \"collect-profiles-29521020-9sztf\" (UID: \"ad671622-917b-4e62-a887-d2d6e0935f2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.446557 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dc49196-10d0-4e90-8523-8f0d055c5800-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rpzss\" (UID: \"5dc49196-10d0-4e90-8523-8f0d055c5800\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rpzss" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.446628 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdd2g\" (UniqueName: \"kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-kube-api-access-tdd2g\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.446680 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-socket-dir\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.447198 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d9ed0cdf-88f2-42cd-93e9-22517410ca31-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4jpbt\" (UID: \"d9ed0cdf-88f2-42cd-93e9-22517410ca31\") " pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.447272 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18408396-529f-4df8-8c25-4c483ea6d203-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8df2c\" (UID: \"18408396-529f-4df8-8c25-4c483ea6d203\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8df2c" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.447498 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b0166ac5-5759-4298-a49b-6a67d179944e-signing-key\") pod \"service-ca-9c57cc56f-9d2ls\" (UID: \"b0166ac5-5759-4298-a49b-6a67d179944e\") " pod="openshift-service-ca/service-ca-9c57cc56f-9d2ls" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.447628 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-registration-dir\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:17 crc 
kubenswrapper[4870]: I0216 17:02:17.447697 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9-config\") pod \"console-operator-58897d9998-8q7hf\" (UID: \"6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9\") " pod="openshift-console-operator/console-operator-58897d9998-8q7hf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.447761 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4-serving-cert\") pod \"openshift-config-operator-7777fb866f-q2xvl\" (UID: \"d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.447793 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b0166ac5-5759-4298-a49b-6a67d179944e-signing-cabundle\") pod \"service-ca-9c57cc56f-9d2ls\" (UID: \"b0166ac5-5759-4298-a49b-6a67d179944e\") " pod="openshift-service-ca/service-ca-9c57cc56f-9d2ls" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.448031 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7a20c9f-7be2-422e-bb13-de026cae08f7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m4tjr\" (UID: \"e7a20c9f-7be2-422e-bb13-de026cae08f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m4tjr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.448543 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rm7p\" (UniqueName: 
\"kubernetes.io/projected/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-kube-api-access-9rm7p\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.448837 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-registry-tls\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.448881 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63e96b44-624a-4d42-b63e-22506f5bd250-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-tcchp\" (UID: \"63e96b44-624a-4d42-b63e-22506f5bd250\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tcchp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.448902 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jtwj\" (UniqueName: \"kubernetes.io/projected/08d93167-bc2f-4032-9840-f5eda9916ddd-kube-api-access-7jtwj\") pod \"catalog-operator-68c6474976-jxjrh\" (UID: \"08d93167-bc2f-4032-9840-f5eda9916ddd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.448939 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qrh9\" (UniqueName: \"kubernetes.io/projected/d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4-kube-api-access-2qrh9\") pod \"openshift-config-operator-7777fb866f-q2xvl\" (UID: \"d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.450397 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f83f0b7-935c-4972-83c7-bd9a2d95afc1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4k6sf\" (UID: \"7f83f0b7-935c-4972-83c7-bd9a2d95afc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.450437 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a39ee5b4-a248-4d15-95c3-bf801fe2c6de-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lv95l\" (UID: \"a39ee5b4-a248-4d15-95c3-bf801fe2c6de\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.450776 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d353c23-d497-4fb3-8672-88f6cb2734d4-metrics-tls\") pod \"ingress-operator-5b745b69d9-jfvff\" (UID: \"1d353c23-d497-4fb3-8672-88f6cb2734d4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.451030 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p48z4\" (UniqueName: \"kubernetes.io/projected/d9ed0cdf-88f2-42cd-93e9-22517410ca31-kube-api-access-p48z4\") pod \"marketplace-operator-79b997595-4jpbt\" (UID: \"d9ed0cdf-88f2-42cd-93e9-22517410ca31\") " pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.451432 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2jwv\" (UniqueName: \"kubernetes.io/projected/d2caf342-1317-477b-bf32-eb860c7395c8-kube-api-access-w2jwv\") pod \"migrator-59844c95c7-rqjhl\" (UID: \"d2caf342-1317-477b-bf32-eb860c7395c8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rqjhl" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.452139 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e6bf0f44-e205-4b3c-8360-a9578c67459f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.452668 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db7abdfa-44a0-4c7b-b314-bec98e87552d-config-volume\") pod \"dns-default-k28wm\" (UID: \"db7abdfa-44a0-4c7b-b314-bec98e87552d\") " pod="openshift-dns/dns-default-k28wm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.452769 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-mountpoint-dir\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.452833 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frdhg\" (UniqueName: \"kubernetes.io/projected/6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9-kube-api-access-frdhg\") pod \"console-operator-58897d9998-8q7hf\" (UID: \"6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9\") " 
pod="openshift-console-operator/console-operator-58897d9998-8q7hf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.452870 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa1008c7-de78-4cc4-93d1-b6b22198a05a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bn28c\" (UID: \"fa1008c7-de78-4cc4-93d1-b6b22198a05a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bn28c" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.452897 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db7abdfa-44a0-4c7b-b314-bec98e87552d-metrics-tls\") pod \"dns-default-k28wm\" (UID: \"db7abdfa-44a0-4c7b-b314-bec98e87552d\") " pod="openshift-dns/dns-default-k28wm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.453025 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4634141f-b890-48f3-b6c7-d8a730ff29b5-serving-cert\") pod \"service-ca-operator-777779d784-f4pwk\" (UID: \"4634141f-b890-48f3-b6c7-d8a730ff29b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f4pwk" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.453084 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/986bb24c-5a0b-4fe9-bd99-e48c3477cc45-tmpfs\") pod \"packageserver-d55dfcdfc-6v8zp\" (UID: \"986bb24c-5a0b-4fe9-bd99-e48c3477cc45\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.453118 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9-trusted-ca\") pod \"console-operator-58897d9998-8q7hf\" (UID: \"6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9\") " pod="openshift-console-operator/console-operator-58897d9998-8q7hf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.453146 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9ed0cdf-88f2-42cd-93e9-22517410ca31-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4jpbt\" (UID: \"d9ed0cdf-88f2-42cd-93e9-22517410ca31\") " pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.453173 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f2sl\" (UniqueName: \"kubernetes.io/projected/73e4dd57-ca25-46e2-9afb-a67a8d339e67-kube-api-access-2f2sl\") pod \"package-server-manager-789f6589d5-v25cw\" (UID: \"73e4dd57-ca25-46e2-9afb-a67a8d339e67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v25cw" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.453247 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4634141f-b890-48f3-b6c7-d8a730ff29b5-config\") pod \"service-ca-operator-777779d784-f4pwk\" (UID: \"4634141f-b890-48f3-b6c7-d8a730ff29b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f4pwk" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.453273 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63e96b44-624a-4d42-b63e-22506f5bd250-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tcchp\" (UID: \"63e96b44-624a-4d42-b63e-22506f5bd250\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tcchp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.453299 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj8fh\" (UniqueName: \"kubernetes.io/projected/0ae17602-ebe6-41e2-9241-c0552e6a4e7e-kube-api-access-xj8fh\") pod \"dns-operator-744455d44c-8rbb8\" (UID: \"0ae17602-ebe6-41e2-9241-c0552e6a4e7e\") " pod="openshift-dns-operator/dns-operator-744455d44c-8rbb8" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.453346 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-bound-sa-token\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.453396 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7a20c9f-7be2-422e-bb13-de026cae08f7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m4tjr\" (UID: \"e7a20c9f-7be2-422e-bb13-de026cae08f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m4tjr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.453422 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8779ed51-68c4-4fc0-8e83-994215a16ba0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4nxwd\" (UID: \"8779ed51-68c4-4fc0-8e83-994215a16ba0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4nxwd" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.453453 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7a20c9f-7be2-422e-bb13-de026cae08f7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m4tjr\" (UID: \"e7a20c9f-7be2-422e-bb13-de026cae08f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m4tjr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.453496 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e6bf0f44-e205-4b3c-8360-a9578c67459f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.453847 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-registry-tls\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.454265 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6bf0f44-e205-4b3c-8360-a9578c67459f-trusted-ca\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.454399 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d065b0a1-56ea-4bb1-aef7-9d9f46a46a42-certs\") pod \"machine-config-server-fthnr\" (UID: \"d065b0a1-56ea-4bb1-aef7-9d9f46a46a42\") " 
pod="openshift-machine-config-operator/machine-config-server-fthnr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.454525 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e6bf0f44-e205-4b3c-8360-a9578c67459f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.454599 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a39ee5b4-a248-4d15-95c3-bf801fe2c6de-proxy-tls\") pod \"machine-config-controller-84d6567774-lv95l\" (UID: \"a39ee5b4-a248-4d15-95c3-bf801fe2c6de\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.454716 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvm5q\" (UniqueName: \"kubernetes.io/projected/a39ee5b4-a248-4d15-95c3-bf801fe2c6de-kube-api-access-bvm5q\") pod \"machine-config-controller-84d6567774-lv95l\" (UID: \"a39ee5b4-a248-4d15-95c3-bf801fe2c6de\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.454885 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3852642-f948-4814-8fbd-04301eb7b9c1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jxk8b\" (UID: \"e3852642-f948-4814-8fbd-04301eb7b9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jxk8b" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.455004 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/986bb24c-5a0b-4fe9-bd99-e48c3477cc45-webhook-cert\") pod \"packageserver-d55dfcdfc-6v8zp\" (UID: \"986bb24c-5a0b-4fe9-bd99-e48c3477cc45\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.455107 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63e96b44-624a-4d42-b63e-22506f5bd250-config\") pod \"kube-apiserver-operator-766d6c64bb-tcchp\" (UID: \"63e96b44-624a-4d42-b63e-22506f5bd250\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tcchp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.455182 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/00f06e07-b382-4409-a7af-cd84abf48e99-images\") pod \"machine-config-operator-74547568cd-pnznr\" (UID: \"00f06e07-b382-4409-a7af-cd84abf48e99\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.455300 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18408396-529f-4df8-8c25-4c483ea6d203-config\") pod \"kube-controller-manager-operator-78b949d7b-8df2c\" (UID: \"18408396-529f-4df8-8c25-4c483ea6d203\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8df2c" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.456730 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6bf0f44-e205-4b3c-8360-a9578c67459f-trusted-ca\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: 
\"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.463830 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.492250 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.503486 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.523209 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.543746 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.556460 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:17 crc kubenswrapper[4870]: E0216 17:02:17.556736 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.056677006 +0000 UTC m=+142.540141400 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.556862 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9kfp\" (UniqueName: \"kubernetes.io/projected/5dc49196-10d0-4e90-8523-8f0d055c5800-kube-api-access-p9kfp\") pod \"cluster-samples-operator-665b6dd947-rpzss\" (UID: \"5dc49196-10d0-4e90-8523-8f0d055c5800\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rpzss" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.556903 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08d93167-bc2f-4032-9840-f5eda9916ddd-profile-collector-cert\") pod \"catalog-operator-68c6474976-jxjrh\" (UID: \"08d93167-bc2f-4032-9840-f5eda9916ddd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.556938 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7gg4\" (UniqueName: \"kubernetes.io/projected/1d353c23-d497-4fb3-8672-88f6cb2734d4-kube-api-access-m7gg4\") pod \"ingress-operator-5b745b69d9-jfvff\" (UID: \"1d353c23-d497-4fb3-8672-88f6cb2734d4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.557040 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/7f83f0b7-935c-4972-83c7-bd9a2d95afc1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4k6sf\" (UID: \"7f83f0b7-935c-4972-83c7-bd9a2d95afc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.557076 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f83f0b7-935c-4972-83c7-bd9a2d95afc1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4k6sf\" (UID: \"7f83f0b7-935c-4972-83c7-bd9a2d95afc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.557102 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad671622-917b-4e62-a887-d2d6e0935f2e-config-volume\") pod \"collect-profiles-29521020-9sztf\" (UID: \"ad671622-917b-4e62-a887-d2d6e0935f2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.557135 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dc49196-10d0-4e90-8523-8f0d055c5800-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rpzss\" (UID: \"5dc49196-10d0-4e90-8523-8f0d055c5800\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rpzss" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.557167 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-socket-dir\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:17 crc 
kubenswrapper[4870]: I0216 17:02:17.557190 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d9ed0cdf-88f2-42cd-93e9-22517410ca31-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4jpbt\" (UID: \"d9ed0cdf-88f2-42cd-93e9-22517410ca31\") " pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.557214 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18408396-529f-4df8-8c25-4c483ea6d203-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8df2c\" (UID: \"18408396-529f-4df8-8c25-4c483ea6d203\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8df2c" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.557241 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b0166ac5-5759-4298-a49b-6a67d179944e-signing-key\") pod \"service-ca-9c57cc56f-9d2ls\" (UID: \"b0166ac5-5759-4298-a49b-6a67d179944e\") " pod="openshift-service-ca/service-ca-9c57cc56f-9d2ls" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.557270 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-registration-dir\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.557292 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9-config\") pod \"console-operator-58897d9998-8q7hf\" (UID: \"6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9\") " 
pod="openshift-console-operator/console-operator-58897d9998-8q7hf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.557312 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4-serving-cert\") pod \"openshift-config-operator-7777fb866f-q2xvl\" (UID: \"d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.557331 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b0166ac5-5759-4298-a49b-6a67d179944e-signing-cabundle\") pod \"service-ca-9c57cc56f-9d2ls\" (UID: \"b0166ac5-5759-4298-a49b-6a67d179944e\") " pod="openshift-service-ca/service-ca-9c57cc56f-9d2ls" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.557359 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7a20c9f-7be2-422e-bb13-de026cae08f7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m4tjr\" (UID: \"e7a20c9f-7be2-422e-bb13-de026cae08f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m4tjr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.557388 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rm7p\" (UniqueName: \"kubernetes.io/projected/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-kube-api-access-9rm7p\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.557412 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63e96b44-624a-4d42-b63e-22506f5bd250-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-tcchp\" (UID: \"63e96b44-624a-4d42-b63e-22506f5bd250\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tcchp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.557437 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jtwj\" (UniqueName: \"kubernetes.io/projected/08d93167-bc2f-4032-9840-f5eda9916ddd-kube-api-access-7jtwj\") pod \"catalog-operator-68c6474976-jxjrh\" (UID: \"08d93167-bc2f-4032-9840-f5eda9916ddd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.558050 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qrh9\" (UniqueName: \"kubernetes.io/projected/d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4-kube-api-access-2qrh9\") pod \"openshift-config-operator-7777fb866f-q2xvl\" (UID: \"d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.558172 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f83f0b7-935c-4972-83c7-bd9a2d95afc1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4k6sf\" (UID: \"7f83f0b7-935c-4972-83c7-bd9a2d95afc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.558217 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a39ee5b4-a248-4d15-95c3-bf801fe2c6de-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lv95l\" (UID: \"a39ee5b4-a248-4d15-95c3-bf801fe2c6de\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l" Feb 16 17:02:17 crc 
kubenswrapper[4870]: I0216 17:02:17.558467 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-socket-dir\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.558782 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-registration-dir\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.559026 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d353c23-d497-4fb3-8672-88f6cb2734d4-metrics-tls\") pod \"ingress-operator-5b745b69d9-jfvff\" (UID: \"1d353c23-d497-4fb3-8672-88f6cb2734d4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.559131 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p48z4\" (UniqueName: \"kubernetes.io/projected/d9ed0cdf-88f2-42cd-93e9-22517410ca31-kube-api-access-p48z4\") pod \"marketplace-operator-79b997595-4jpbt\" (UID: \"d9ed0cdf-88f2-42cd-93e9-22517410ca31\") " pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.559173 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b0166ac5-5759-4298-a49b-6a67d179944e-signing-cabundle\") pod \"service-ca-9c57cc56f-9d2ls\" (UID: \"b0166ac5-5759-4298-a49b-6a67d179944e\") " pod="openshift-service-ca/service-ca-9c57cc56f-9d2ls" Feb 16 17:02:17 crc 
kubenswrapper[4870]: I0216 17:02:17.559191 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2jwv\" (UniqueName: \"kubernetes.io/projected/d2caf342-1317-477b-bf32-eb860c7395c8-kube-api-access-w2jwv\") pod \"migrator-59844c95c7-rqjhl\" (UID: \"d2caf342-1317-477b-bf32-eb860c7395c8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rqjhl" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.559542 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db7abdfa-44a0-4c7b-b314-bec98e87552d-config-volume\") pod \"dns-default-k28wm\" (UID: \"db7abdfa-44a0-4c7b-b314-bec98e87552d\") " pod="openshift-dns/dns-default-k28wm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.559604 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-mountpoint-dir\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.559636 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frdhg\" (UniqueName: \"kubernetes.io/projected/6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9-kube-api-access-frdhg\") pod \"console-operator-58897d9998-8q7hf\" (UID: \"6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9\") " pod="openshift-console-operator/console-operator-58897d9998-8q7hf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.559665 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa1008c7-de78-4cc4-93d1-b6b22198a05a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bn28c\" (UID: 
\"fa1008c7-de78-4cc4-93d1-b6b22198a05a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bn28c" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.559695 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db7abdfa-44a0-4c7b-b314-bec98e87552d-metrics-tls\") pod \"dns-default-k28wm\" (UID: \"db7abdfa-44a0-4c7b-b314-bec98e87552d\") " pod="openshift-dns/dns-default-k28wm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.559736 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4634141f-b890-48f3-b6c7-d8a730ff29b5-serving-cert\") pod \"service-ca-operator-777779d784-f4pwk\" (UID: \"4634141f-b890-48f3-b6c7-d8a730ff29b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f4pwk" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.559759 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/986bb24c-5a0b-4fe9-bd99-e48c3477cc45-tmpfs\") pod \"packageserver-d55dfcdfc-6v8zp\" (UID: \"986bb24c-5a0b-4fe9-bd99-e48c3477cc45\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.559787 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9-trusted-ca\") pod \"console-operator-58897d9998-8q7hf\" (UID: \"6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9\") " pod="openshift-console-operator/console-operator-58897d9998-8q7hf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.559811 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9ed0cdf-88f2-42cd-93e9-22517410ca31-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-4jpbt\" (UID: \"d9ed0cdf-88f2-42cd-93e9-22517410ca31\") " pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.559835 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f2sl\" (UniqueName: \"kubernetes.io/projected/73e4dd57-ca25-46e2-9afb-a67a8d339e67-kube-api-access-2f2sl\") pod \"package-server-manager-789f6589d5-v25cw\" (UID: \"73e4dd57-ca25-46e2-9afb-a67a8d339e67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v25cw" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.559874 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4634141f-b890-48f3-b6c7-d8a730ff29b5-config\") pod \"service-ca-operator-777779d784-f4pwk\" (UID: \"4634141f-b890-48f3-b6c7-d8a730ff29b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f4pwk" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.559904 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63e96b44-624a-4d42-b63e-22506f5bd250-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tcchp\" (UID: \"63e96b44-624a-4d42-b63e-22506f5bd250\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tcchp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.559931 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj8fh\" (UniqueName: \"kubernetes.io/projected/0ae17602-ebe6-41e2-9241-c0552e6a4e7e-kube-api-access-xj8fh\") pod \"dns-operator-744455d44c-8rbb8\" (UID: \"0ae17602-ebe6-41e2-9241-c0552e6a4e7e\") " pod="openshift-dns-operator/dns-operator-744455d44c-8rbb8" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560037 4870 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7a20c9f-7be2-422e-bb13-de026cae08f7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m4tjr\" (UID: \"e7a20c9f-7be2-422e-bb13-de026cae08f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m4tjr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560061 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8779ed51-68c4-4fc0-8e83-994215a16ba0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4nxwd\" (UID: \"8779ed51-68c4-4fc0-8e83-994215a16ba0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4nxwd" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560123 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7a20c9f-7be2-422e-bb13-de026cae08f7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m4tjr\" (UID: \"e7a20c9f-7be2-422e-bb13-de026cae08f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m4tjr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560157 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d065b0a1-56ea-4bb1-aef7-9d9f46a46a42-certs\") pod \"machine-config-server-fthnr\" (UID: \"d065b0a1-56ea-4bb1-aef7-9d9f46a46a42\") " pod="openshift-machine-config-operator/machine-config-server-fthnr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560184 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a39ee5b4-a248-4d15-95c3-bf801fe2c6de-proxy-tls\") pod \"machine-config-controller-84d6567774-lv95l\" (UID: \"a39ee5b4-a248-4d15-95c3-bf801fe2c6de\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560213 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvm5q\" (UniqueName: \"kubernetes.io/projected/a39ee5b4-a248-4d15-95c3-bf801fe2c6de-kube-api-access-bvm5q\") pod \"machine-config-controller-84d6567774-lv95l\" (UID: \"a39ee5b4-a248-4d15-95c3-bf801fe2c6de\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560284 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3852642-f948-4814-8fbd-04301eb7b9c1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jxk8b\" (UID: \"e3852642-f948-4814-8fbd-04301eb7b9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jxk8b" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560311 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/986bb24c-5a0b-4fe9-bd99-e48c3477cc45-webhook-cert\") pod \"packageserver-d55dfcdfc-6v8zp\" (UID: \"986bb24c-5a0b-4fe9-bd99-e48c3477cc45\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560339 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63e96b44-624a-4d42-b63e-22506f5bd250-config\") pod \"kube-apiserver-operator-766d6c64bb-tcchp\" (UID: \"63e96b44-624a-4d42-b63e-22506f5bd250\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tcchp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560360 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/00f06e07-b382-4409-a7af-cd84abf48e99-images\") pod \"machine-config-operator-74547568cd-pnznr\" (UID: \"00f06e07-b382-4409-a7af-cd84abf48e99\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560386 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18408396-529f-4df8-8c25-4c483ea6d203-config\") pod \"kube-controller-manager-operator-78b949d7b-8df2c\" (UID: \"18408396-529f-4df8-8c25-4c483ea6d203\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8df2c" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560414 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv8gd\" (UniqueName: \"kubernetes.io/projected/7f83f0b7-935c-4972-83c7-bd9a2d95afc1-kube-api-access-sv8gd\") pod \"cluster-image-registry-operator-dc59b4c8b-4k6sf\" (UID: \"7f83f0b7-935c-4972-83c7-bd9a2d95afc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560441 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8779ed51-68c4-4fc0-8e83-994215a16ba0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4nxwd\" (UID: \"8779ed51-68c4-4fc0-8e83-994215a16ba0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4nxwd" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560468 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6btb\" (UniqueName: \"kubernetes.io/projected/05a3a12e-c6b1-4ff3-9926-c9bb85192b03-kube-api-access-q6btb\") pod \"multus-admission-controller-857f4d67dd-zgkhs\" (UID: 
\"05a3a12e-c6b1-4ff3-9926-c9bb85192b03\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zgkhs" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560495 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxkph\" (UniqueName: \"kubernetes.io/projected/986bb24c-5a0b-4fe9-bd99-e48c3477cc45-kube-api-access-wxkph\") pod \"packageserver-d55dfcdfc-6v8zp\" (UID: \"986bb24c-5a0b-4fe9-bd99-e48c3477cc45\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560529 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9-serving-cert\") pod \"console-operator-58897d9998-8q7hf\" (UID: \"6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9\") " pod="openshift-console-operator/console-operator-58897d9998-8q7hf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560551 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00f06e07-b382-4409-a7af-cd84abf48e99-proxy-tls\") pod \"machine-config-operator-74547568cd-pnznr\" (UID: \"00f06e07-b382-4409-a7af-cd84abf48e99\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560575 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnmjl\" (UniqueName: \"kubernetes.io/projected/db7abdfa-44a0-4c7b-b314-bec98e87552d-kube-api-access-tnmjl\") pod \"dns-default-k28wm\" (UID: \"db7abdfa-44a0-4c7b-b314-bec98e87552d\") " pod="openshift-dns/dns-default-k28wm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560616 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ad671622-917b-4e62-a887-d2d6e0935f2e-secret-volume\") pod \"collect-profiles-29521020-9sztf\" (UID: \"ad671622-917b-4e62-a887-d2d6e0935f2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560641 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08d93167-bc2f-4032-9840-f5eda9916ddd-srv-cert\") pod \"catalog-operator-68c6474976-jxjrh\" (UID: \"08d93167-bc2f-4032-9840-f5eda9916ddd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560669 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca7e9d14-d778-46fa-bbd4-326a1cf28a38-cert\") pod \"ingress-canary-g78th\" (UID: \"ca7e9d14-d778-46fa-bbd4-326a1cf28a38\") " pod="openshift-ingress-canary/ingress-canary-g78th" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560694 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t7w5\" (UniqueName: \"kubernetes.io/projected/ad671622-917b-4e62-a887-d2d6e0935f2e-kube-api-access-8t7w5\") pod \"collect-profiles-29521020-9sztf\" (UID: \"ad671622-917b-4e62-a887-d2d6e0935f2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560721 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d353c23-d497-4fb3-8672-88f6cb2734d4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jfvff\" (UID: \"1d353c23-d497-4fb3-8672-88f6cb2734d4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560768 4870 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-nsf24\" (UniqueName: \"kubernetes.io/projected/00f06e07-b382-4409-a7af-cd84abf48e99-kube-api-access-nsf24\") pod \"machine-config-operator-74547568cd-pnznr\" (UID: \"00f06e07-b382-4409-a7af-cd84abf48e99\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560798 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d353c23-d497-4fb3-8672-88f6cb2734d4-trusted-ca\") pod \"ingress-operator-5b745b69d9-jfvff\" (UID: \"1d353c23-d497-4fb3-8672-88f6cb2734d4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560825 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d065b0a1-56ea-4bb1-aef7-9d9f46a46a42-node-bootstrap-token\") pod \"machine-config-server-fthnr\" (UID: \"d065b0a1-56ea-4bb1-aef7-9d9f46a46a42\") " pod="openshift-machine-config-operator/machine-config-server-fthnr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560852 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/986bb24c-5a0b-4fe9-bd99-e48c3477cc45-apiservice-cert\") pod \"packageserver-d55dfcdfc-6v8zp\" (UID: \"986bb24c-5a0b-4fe9-bd99-e48c3477cc45\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560877 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-q2xvl\" (UID: \"d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560901 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn6dq\" (UniqueName: \"kubernetes.io/projected/e3852642-f948-4814-8fbd-04301eb7b9c1-kube-api-access-pn6dq\") pod \"kube-storage-version-migrator-operator-b67b599dd-jxk8b\" (UID: \"e3852642-f948-4814-8fbd-04301eb7b9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jxk8b" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560926 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb6qr\" (UniqueName: \"kubernetes.io/projected/ca7e9d14-d778-46fa-bbd4-326a1cf28a38-kube-api-access-kb6qr\") pod \"ingress-canary-g78th\" (UID: \"ca7e9d14-d778-46fa-bbd4-326a1cf28a38\") " pod="openshift-ingress-canary/ingress-canary-g78th" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560974 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/73e4dd57-ca25-46e2-9afb-a67a8d339e67-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-v25cw\" (UID: \"73e4dd57-ca25-46e2-9afb-a67a8d339e67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v25cw" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.560999 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ae17602-ebe6-41e2-9241-c0552e6a4e7e-metrics-tls\") pod \"dns-operator-744455d44c-8rbb8\" (UID: \"0ae17602-ebe6-41e2-9241-c0552e6a4e7e\") " pod="openshift-dns-operator/dns-operator-744455d44c-8rbb8" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.561025 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lmxqn\" (UniqueName: \"kubernetes.io/projected/8779ed51-68c4-4fc0-8e83-994215a16ba0-kube-api-access-lmxqn\") pod \"openshift-controller-manager-operator-756b6f6bc6-4nxwd\" (UID: \"8779ed51-68c4-4fc0-8e83-994215a16ba0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4nxwd" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.561066 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.561101 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/05a3a12e-c6b1-4ff3-9926-c9bb85192b03-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zgkhs\" (UID: \"05a3a12e-c6b1-4ff3-9926-c9bb85192b03\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zgkhs" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.561130 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9jrr\" (UniqueName: \"kubernetes.io/projected/d065b0a1-56ea-4bb1-aef7-9d9f46a46a42-kube-api-access-r9jrr\") pod \"machine-config-server-fthnr\" (UID: \"d065b0a1-56ea-4bb1-aef7-9d9f46a46a42\") " pod="openshift-machine-config-operator/machine-config-server-fthnr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.561152 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-plugins-dir\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " 
pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.561182 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8xf9\" (UniqueName: \"kubernetes.io/projected/fa1008c7-de78-4cc4-93d1-b6b22198a05a-kube-api-access-w8xf9\") pod \"control-plane-machine-set-operator-78cbb6b69f-bn28c\" (UID: \"fa1008c7-de78-4cc4-93d1-b6b22198a05a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bn28c" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.561210 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00f06e07-b382-4409-a7af-cd84abf48e99-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pnznr\" (UID: \"00f06e07-b382-4409-a7af-cd84abf48e99\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.561234 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-csi-data-dir\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.561271 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fj5k\" (UniqueName: \"kubernetes.io/projected/b0166ac5-5759-4298-a49b-6a67d179944e-kube-api-access-7fj5k\") pod \"service-ca-9c57cc56f-9d2ls\" (UID: \"b0166ac5-5759-4298-a49b-6a67d179944e\") " pod="openshift-service-ca/service-ca-9c57cc56f-9d2ls" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.561297 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjdph\" (UniqueName: 
\"kubernetes.io/projected/4634141f-b890-48f3-b6c7-d8a730ff29b5-kube-api-access-xjdph\") pod \"service-ca-operator-777779d784-f4pwk\" (UID: \"4634141f-b890-48f3-b6c7-d8a730ff29b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f4pwk" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.561323 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3852642-f948-4814-8fbd-04301eb7b9c1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jxk8b\" (UID: \"e3852642-f948-4814-8fbd-04301eb7b9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jxk8b" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.561352 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18408396-529f-4df8-8c25-4c483ea6d203-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8df2c\" (UID: \"18408396-529f-4df8-8c25-4c483ea6d203\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8df2c" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.561550 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/08d93167-bc2f-4032-9840-f5eda9916ddd-profile-collector-cert\") pod \"catalog-operator-68c6474976-jxjrh\" (UID: \"08d93167-bc2f-4032-9840-f5eda9916ddd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.562079 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18408396-529f-4df8-8c25-4c483ea6d203-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8df2c\" (UID: \"18408396-529f-4df8-8c25-4c483ea6d203\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8df2c" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.559537 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9-config\") pod \"console-operator-58897d9998-8q7hf\" (UID: \"6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9\") " pod="openshift-console-operator/console-operator-58897d9998-8q7hf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.562334 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-mountpoint-dir\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.562789 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a39ee5b4-a248-4d15-95c3-bf801fe2c6de-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lv95l\" (UID: \"a39ee5b4-a248-4d15-95c3-bf801fe2c6de\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.562845 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d9ed0cdf-88f2-42cd-93e9-22517410ca31-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4jpbt\" (UID: \"d9ed0cdf-88f2-42cd-93e9-22517410ca31\") " pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.563217 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ad671622-917b-4e62-a887-d2d6e0935f2e-config-volume\") pod \"collect-profiles-29521020-9sztf\" (UID: \"ad671622-917b-4e62-a887-d2d6e0935f2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.564252 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f83f0b7-935c-4972-83c7-bd9a2d95afc1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4k6sf\" (UID: \"7f83f0b7-935c-4972-83c7-bd9a2d95afc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.564359 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-plugins-dir\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.564581 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d353c23-d497-4fb3-8672-88f6cb2734d4-metrics-tls\") pod \"ingress-operator-5b745b69d9-jfvff\" (UID: \"1d353c23-d497-4fb3-8672-88f6cb2734d4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" Feb 16 17:02:17 crc kubenswrapper[4870]: E0216 17:02:17.565218 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.065195178 +0000 UTC m=+142.548659762 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.566815 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18408396-529f-4df8-8c25-4c483ea6d203-config\") pod \"kube-controller-manager-operator-78b949d7b-8df2c\" (UID: \"18408396-529f-4df8-8c25-4c483ea6d203\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8df2c" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.568965 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00f06e07-b382-4409-a7af-cd84abf48e99-proxy-tls\") pod \"machine-config-operator-74547568cd-pnznr\" (UID: \"00f06e07-b382-4409-a7af-cd84abf48e99\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.569475 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa1008c7-de78-4cc4-93d1-b6b22198a05a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bn28c\" (UID: \"fa1008c7-de78-4cc4-93d1-b6b22198a05a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bn28c" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.570097 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/7f83f0b7-935c-4972-83c7-bd9a2d95afc1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4k6sf\" (UID: \"7f83f0b7-935c-4972-83c7-bd9a2d95afc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.570139 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad671622-917b-4e62-a887-d2d6e0935f2e-secret-volume\") pod \"collect-profiles-29521020-9sztf\" (UID: \"ad671622-917b-4e62-a887-d2d6e0935f2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.570297 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ae17602-ebe6-41e2-9241-c0552e6a4e7e-metrics-tls\") pod \"dns-operator-744455d44c-8rbb8\" (UID: \"0ae17602-ebe6-41e2-9241-c0552e6a4e7e\") " pod="openshift-dns-operator/dns-operator-744455d44c-8rbb8" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.570814 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9-serving-cert\") pod \"console-operator-58897d9998-8q7hf\" (UID: \"6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9\") " pod="openshift-console-operator/console-operator-58897d9998-8q7hf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.572114 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/08d93167-bc2f-4032-9840-f5eda9916ddd-srv-cert\") pod \"catalog-operator-68c6474976-jxjrh\" (UID: \"08d93167-bc2f-4032-9840-f5eda9916ddd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.573365 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-csi-data-dir\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.573635 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d353c23-d497-4fb3-8672-88f6cb2734d4-trusted-ca\") pod \"ingress-operator-5b745b69d9-jfvff\" (UID: \"1d353c23-d497-4fb3-8672-88f6cb2734d4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.574054 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-q2xvl\" (UID: \"d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.574147 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4634141f-b890-48f3-b6c7-d8a730ff29b5-serving-cert\") pod \"service-ca-operator-777779d784-f4pwk\" (UID: \"4634141f-b890-48f3-b6c7-d8a730ff29b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f4pwk" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.574353 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.574809 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3852642-f948-4814-8fbd-04301eb7b9c1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jxk8b\" (UID: 
\"e3852642-f948-4814-8fbd-04301eb7b9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jxk8b" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.574923 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/73e4dd57-ca25-46e2-9afb-a67a8d339e67-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-v25cw\" (UID: \"73e4dd57-ca25-46e2-9afb-a67a8d339e67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v25cw" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.576621 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8779ed51-68c4-4fc0-8e83-994215a16ba0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4nxwd\" (UID: \"8779ed51-68c4-4fc0-8e83-994215a16ba0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4nxwd" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.577183 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9ed0cdf-88f2-42cd-93e9-22517410ca31-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4jpbt\" (UID: \"d9ed0cdf-88f2-42cd-93e9-22517410ca31\") " pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.577193 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9-trusted-ca\") pod \"console-operator-58897d9998-8q7hf\" (UID: \"6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9\") " pod="openshift-console-operator/console-operator-58897d9998-8q7hf" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.577755 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4634141f-b890-48f3-b6c7-d8a730ff29b5-config\") pod \"service-ca-operator-777779d784-f4pwk\" (UID: \"4634141f-b890-48f3-b6c7-d8a730ff29b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f4pwk" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.577762 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/986bb24c-5a0b-4fe9-bd99-e48c3477cc45-apiservice-cert\") pod \"packageserver-d55dfcdfc-6v8zp\" (UID: \"986bb24c-5a0b-4fe9-bd99-e48c3477cc45\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.578007 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/986bb24c-5a0b-4fe9-bd99-e48c3477cc45-webhook-cert\") pod \"packageserver-d55dfcdfc-6v8zp\" (UID: \"986bb24c-5a0b-4fe9-bd99-e48c3477cc45\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.578241 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/986bb24c-5a0b-4fe9-bd99-e48c3477cc45-tmpfs\") pod \"packageserver-d55dfcdfc-6v8zp\" (UID: \"986bb24c-5a0b-4fe9-bd99-e48c3477cc45\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.578957 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63e96b44-624a-4d42-b63e-22506f5bd250-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-tcchp\" (UID: \"63e96b44-624a-4d42-b63e-22506f5bd250\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tcchp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.579250 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00f06e07-b382-4409-a7af-cd84abf48e99-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pnznr\" (UID: \"00f06e07-b382-4409-a7af-cd84abf48e99\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.579616 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3852642-f948-4814-8fbd-04301eb7b9c1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jxk8b\" (UID: \"e3852642-f948-4814-8fbd-04301eb7b9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jxk8b" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.579693 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63e96b44-624a-4d42-b63e-22506f5bd250-config\") pod \"kube-apiserver-operator-766d6c64bb-tcchp\" (UID: \"63e96b44-624a-4d42-b63e-22506f5bd250\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tcchp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.579883 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4-serving-cert\") pod \"openshift-config-operator-7777fb866f-q2xvl\" (UID: \"d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.580714 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a39ee5b4-a248-4d15-95c3-bf801fe2c6de-proxy-tls\") pod \"machine-config-controller-84d6567774-lv95l\" (UID: \"a39ee5b4-a248-4d15-95c3-bf801fe2c6de\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.581850 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/00f06e07-b382-4409-a7af-cd84abf48e99-images\") pod \"machine-config-operator-74547568cd-pnznr\" (UID: \"00f06e07-b382-4409-a7af-cd84abf48e99\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.582505 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/05a3a12e-c6b1-4ff3-9926-c9bb85192b03-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zgkhs\" (UID: \"05a3a12e-c6b1-4ff3-9926-c9bb85192b03\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zgkhs" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.582784 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.584442 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8779ed51-68c4-4fc0-8e83-994215a16ba0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4nxwd\" (UID: \"8779ed51-68c4-4fc0-8e83-994215a16ba0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4nxwd" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.585223 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b0166ac5-5759-4298-a49b-6a67d179944e-signing-key\") pod \"service-ca-9c57cc56f-9d2ls\" (UID: \"b0166ac5-5759-4298-a49b-6a67d179944e\") " pod="openshift-service-ca/service-ca-9c57cc56f-9d2ls" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.602227 4870 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.622807 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.642069 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.662788 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.663824 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:17 crc kubenswrapper[4870]: E0216 17:02:17.664030 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.1640071 +0000 UTC m=+142.647471484 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.664277 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: E0216 17:02:17.664857 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.164846274 +0000 UTC m=+142.648310658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.684300 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.688044 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7a20c9f-7be2-422e-bb13-de026cae08f7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m4tjr\" (UID: \"e7a20c9f-7be2-422e-bb13-de026cae08f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m4tjr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.712202 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.720428 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7a20c9f-7be2-422e-bb13-de026cae08f7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m4tjr\" (UID: \"e7a20c9f-7be2-422e-bb13-de026cae08f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m4tjr" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.723116 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 
17:02:17.734420 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dc49196-10d0-4e90-8523-8f0d055c5800-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rpzss\" (UID: \"5dc49196-10d0-4e90-8523-8f0d055c5800\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rpzss" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.743934 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.763659 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.766245 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:17 crc kubenswrapper[4870]: E0216 17:02:17.766402 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.266381443 +0000 UTC m=+142.749845837 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.766714 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: E0216 17:02:17.767133 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.267123094 +0000 UTC m=+142.750587478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.784014 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.803606 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.822230 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.843762 4870 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.863725 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.868334 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:17 crc kubenswrapper[4870]: E0216 17:02:17.868432 4870 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.368404826 +0000 UTC m=+142.851869210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.868665 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: E0216 17:02:17.869355 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.369327212 +0000 UTC m=+142.852791656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.899464 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d2s9\" (UniqueName: \"kubernetes.io/projected/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-kube-api-access-2d2s9\") pod \"controller-manager-879f6c89f-c5snp\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.918526 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbc88\" (UniqueName: \"kubernetes.io/projected/9c1cae5d-9592-47f5-9c64-301163ac7b1a-kube-api-access-sbc88\") pod \"machine-approver-56656f9798-tlsvw\" (UID: \"9c1cae5d-9592-47f5-9c64-301163ac7b1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.923031 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.943644 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.963877 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.969905 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:17 crc kubenswrapper[4870]: E0216 17:02:17.970138 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.47010223 +0000 UTC m=+142.953566614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.970852 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:17 crc kubenswrapper[4870]: E0216 17:02:17.971266 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.471255403 +0000 UTC m=+142.954719787 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.971937 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca7e9d14-d778-46fa-bbd4-326a1cf28a38-cert\") pod \"ingress-canary-g78th\" (UID: \"ca7e9d14-d778-46fa-bbd4-326a1cf28a38\") " pod="openshift-ingress-canary/ingress-canary-g78th" Feb 16 17:02:17 crc kubenswrapper[4870]: I0216 17:02:17.983545 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.002222 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.022982 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.028406 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db7abdfa-44a0-4c7b-b314-bec98e87552d-metrics-tls\") pod \"dns-default-k28wm\" (UID: \"db7abdfa-44a0-4c7b-b314-bec98e87552d\") " pod="openshift-dns/dns-default-k28wm" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.043274 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.053650 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/db7abdfa-44a0-4c7b-b314-bec98e87552d-config-volume\") pod \"dns-default-k28wm\" (UID: \"db7abdfa-44a0-4c7b-b314-bec98e87552d\") " pod="openshift-dns/dns-default-k28wm" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.062687 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.072267 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:18 crc kubenswrapper[4870]: E0216 17:02:18.072581 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.572531984 +0000 UTC m=+143.055996398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.073213 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:18 crc kubenswrapper[4870]: E0216 17:02:18.074025 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.573998646 +0000 UTC m=+143.057463070 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.083634 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.092803 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.098312 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d065b0a1-56ea-4bb1-aef7-9d9f46a46a42-node-bootstrap-token\") pod \"machine-config-server-fthnr\" (UID: \"d065b0a1-56ea-4bb1-aef7-9d9f46a46a42\") " pod="openshift-machine-config-operator/machine-config-server-fthnr" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.104172 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.108798 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d065b0a1-56ea-4bb1-aef7-9d9f46a46a42-certs\") pod \"machine-config-server-fthnr\" (UID: \"d065b0a1-56ea-4bb1-aef7-9d9f46a46a42\") " pod="openshift-machine-config-operator/machine-config-server-fthnr" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.138996 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d84ht\" (UniqueName: \"kubernetes.io/projected/68b31442-b3cc-486b-8fd1-e968978c9f1c-kube-api-access-d84ht\") pod \"machine-api-operator-5694c8668f-br5s9\" (UID: \"68b31442-b3cc-486b-8fd1-e968978c9f1c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.158278 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw64s\" (UniqueName: \"kubernetes.io/projected/6cb09d74-7044-4c8c-a89b-6bf4593ffb9d-kube-api-access-hw64s\") pod \"downloads-7954f5f757-fplj9\" (UID: \"6cb09d74-7044-4c8c-a89b-6bf4593ffb9d\") " pod="openshift-console/downloads-7954f5f757-fplj9" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.163321 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.174843 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:18 crc kubenswrapper[4870]: E0216 17:02:18.175773 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.675752852 +0000 UTC m=+143.159217236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.184633 4870 request.go:700] Waited for 1.858130704s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.212146 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hmnw\" (UniqueName: \"kubernetes.io/projected/ed053e72-4999-4b5d-a9f3-c58b92280c8c-kube-api-access-8hmnw\") pod \"console-f9d7485db-n96b6\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.219439 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr8gp\" (UniqueName: \"kubernetes.io/projected/3d988003-748a-4bb4-ac42-c38d41a5295b-kube-api-access-fr8gp\") pod \"authentication-operator-69f744f599-jz2fc\" (UID: \"3d988003-748a-4bb4-ac42-c38d41a5295b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.219927 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8svf4\" (UniqueName: \"kubernetes.io/projected/db804a3b-9f2e-4638-ae79-7ef21a87104d-kube-api-access-8svf4\") pod \"oauth-openshift-558db77b4-5ll8r\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:18 crc 
kubenswrapper[4870]: I0216 17:02:18.235745 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.239968 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78w85\" (UniqueName: \"kubernetes.io/projected/559fe54f-c6f9-4466-b9c8-da6318fc8f59-kube-api-access-78w85\") pod \"apiserver-76f77b778f-cnjq9\" (UID: \"559fe54f-c6f9-4466-b9c8-da6318fc8f59\") " pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.249416 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.259846 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fplj9" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.264582 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qghtr\" (UniqueName: \"kubernetes.io/projected/e138835e-4175-41cf-983c-6940600a8d32-kube-api-access-qghtr\") pod \"etcd-operator-b45778765-j4dv8\" (UID: \"e138835e-4175-41cf-983c-6940600a8d32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.278770 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.280217 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:18 crc kubenswrapper[4870]: E0216 17:02:18.281850 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.7818289 +0000 UTC m=+143.265293284 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.291065 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdlhm\" (UniqueName: \"kubernetes.io/projected/b0e0ea5e-92af-42e9-9f96-809c376bcc69-kube-api-access-mdlhm\") pod \"route-controller-manager-6576b87f9c-g6xvs\" (UID: \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.292135 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.305536 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c5snp"] Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.305560 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtkck\" (UniqueName: \"kubernetes.io/projected/a30d36bf-bb06-4237-9273-eeee9188e931-kube-api-access-qtkck\") pod \"apiserver-7bbb656c7d-mh2p8\" (UID: \"a30d36bf-bb06-4237-9273-eeee9188e931\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.338126 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccx9v\" (UniqueName: \"kubernetes.io/projected/cc3d5e28-d52f-41d9-8360-faa12e014349-kube-api-access-ccx9v\") pod \"olm-operator-6b444d44fb-b5nnl\" (UID: \"cc3d5e28-d52f-41d9-8360-faa12e014349\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.340426 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht6kl\" (UniqueName: \"kubernetes.io/projected/4524185d-72ed-4ff4-be99-2a01cf133dbc-kube-api-access-ht6kl\") pod \"router-default-5444994796-b66lf\" (UID: \"4524185d-72ed-4ff4-be99-2a01cf133dbc\") " pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.383552 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:18 crc kubenswrapper[4870]: E0216 17:02:18.383925 4870 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.883882494 +0000 UTC m=+143.367346878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.384178 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:18 crc kubenswrapper[4870]: E0216 17:02:18.384835 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.884827591 +0000 UTC m=+143.368291975 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.393564 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdd2g\" (UniqueName: \"kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-kube-api-access-tdd2g\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.399534 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-bound-sa-token\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.420354 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.428170 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.428718 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9kfp\" (UniqueName: \"kubernetes.io/projected/5dc49196-10d0-4e90-8523-8f0d055c5800-kube-api-access-p9kfp\") pod \"cluster-samples-operator-665b6dd947-rpzss\" (UID: \"5dc49196-10d0-4e90-8523-8f0d055c5800\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rpzss" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.440105 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.447855 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jtwj\" (UniqueName: \"kubernetes.io/projected/08d93167-bc2f-4032-9840-f5eda9916ddd-kube-api-access-7jtwj\") pod \"catalog-operator-68c6474976-jxjrh\" (UID: \"08d93167-bc2f-4032-9840-f5eda9916ddd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.468825 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qrh9\" (UniqueName: \"kubernetes.io/projected/d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4-kube-api-access-2qrh9\") pod \"openshift-config-operator-7777fb866f-q2xvl\" (UID: \"d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.480218 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.485770 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rm7p\" (UniqueName: \"kubernetes.io/projected/46ad0255-25ad-4b1b-9a88-3a2c8f3eb302-kube-api-access-9rm7p\") pod \"csi-hostpathplugin-8sggp\" (UID: \"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302\") " pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.486475 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:18 crc kubenswrapper[4870]: E0216 17:02:18.487160 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.987133102 +0000 UTC m=+143.470597486 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.509348 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f83f0b7-935c-4972-83c7-bd9a2d95afc1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4k6sf\" (UID: \"7f83f0b7-935c-4972-83c7-bd9a2d95afc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.515062 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.521798 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2jwv\" (UniqueName: \"kubernetes.io/projected/d2caf342-1317-477b-bf32-eb860c7395c8-kube-api-access-w2jwv\") pod \"migrator-59844c95c7-rqjhl\" (UID: \"d2caf342-1317-477b-bf32-eb860c7395c8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rqjhl" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.524444 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-br5s9"] Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.547719 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxkph\" (UniqueName: \"kubernetes.io/projected/986bb24c-5a0b-4fe9-bd99-e48c3477cc45-kube-api-access-wxkph\") pod 
\"packageserver-d55dfcdfc-6v8zp\" (UID: \"986bb24c-5a0b-4fe9-bd99-e48c3477cc45\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.561370 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.570471 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frdhg\" (UniqueName: \"kubernetes.io/projected/6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9-kube-api-access-frdhg\") pod \"console-operator-58897d9998-8q7hf\" (UID: \"6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9\") " pod="openshift-console-operator/console-operator-58897d9998-8q7hf" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.585924 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18408396-529f-4df8-8c25-4c483ea6d203-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8df2c\" (UID: \"18408396-529f-4df8-8c25-4c483ea6d203\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8df2c" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.589469 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:18 crc kubenswrapper[4870]: E0216 17:02:18.589970 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:19.089934027 +0000 UTC m=+143.573398411 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.593393 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rpzss" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.594364 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.609323 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7a20c9f-7be2-422e-bb13-de026cae08f7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m4tjr\" (UID: \"e7a20c9f-7be2-422e-bb13-de026cae08f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m4tjr" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.612770 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m4tjr" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.613082 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.638593 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb6qr\" (UniqueName: \"kubernetes.io/projected/ca7e9d14-d778-46fa-bbd4-326a1cf28a38-kube-api-access-kb6qr\") pod \"ingress-canary-g78th\" (UID: \"ca7e9d14-d778-46fa-bbd4-326a1cf28a38\") " pod="openshift-ingress-canary/ingress-canary-g78th" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.647354 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rqjhl" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.654351 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fplj9"] Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.671333 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8q7hf" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.672323 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8sggp" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.676751 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p48z4\" (UniqueName: \"kubernetes.io/projected/d9ed0cdf-88f2-42cd-93e9-22517410ca31-kube-api-access-p48z4\") pod \"marketplace-operator-79b997595-4jpbt\" (UID: \"d9ed0cdf-88f2-42cd-93e9-22517410ca31\") " pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.688778 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-g78th" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.691063 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.694997 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnmjl\" (UniqueName: \"kubernetes.io/projected/db7abdfa-44a0-4c7b-b314-bec98e87552d-kube-api-access-tnmjl\") pod \"dns-default-k28wm\" (UID: \"db7abdfa-44a0-4c7b-b314-bec98e87552d\") " pod="openshift-dns/dns-default-k28wm" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.700392 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8xf9\" (UniqueName: \"kubernetes.io/projected/fa1008c7-de78-4cc4-93d1-b6b22198a05a-kube-api-access-w8xf9\") pod \"control-plane-machine-set-operator-78cbb6b69f-bn28c\" (UID: \"fa1008c7-de78-4cc4-93d1-b6b22198a05a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bn28c" Feb 16 17:02:18 crc kubenswrapper[4870]: E0216 17:02:18.705474 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.205437304 +0000 UTC m=+143.688901688 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.705986 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:18 crc kubenswrapper[4870]: E0216 17:02:18.706352 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.20634152 +0000 UTC m=+143.689805904 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.715069 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8df2c" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.740722 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.747476 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bn28c" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.787801 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j4dv8"] Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.789732 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmxqn\" (UniqueName: \"kubernetes.io/projected/8779ed51-68c4-4fc0-8e83-994215a16ba0-kube-api-access-lmxqn\") pod \"openshift-controller-manager-operator-756b6f6bc6-4nxwd\" (UID: \"8779ed51-68c4-4fc0-8e83-994215a16ba0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4nxwd" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.790017 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f2sl\" (UniqueName: \"kubernetes.io/projected/73e4dd57-ca25-46e2-9afb-a67a8d339e67-kube-api-access-2f2sl\") pod \"package-server-manager-789f6589d5-v25cw\" (UID: \"73e4dd57-ca25-46e2-9afb-a67a8d339e67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v25cw" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.791881 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t7w5\" (UniqueName: \"kubernetes.io/projected/ad671622-917b-4e62-a887-d2d6e0935f2e-kube-api-access-8t7w5\") pod \"collect-profiles-29521020-9sztf\" (UID: 
\"ad671622-917b-4e62-a887-d2d6e0935f2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.800410 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-n96b6"] Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.802617 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d353c23-d497-4fb3-8672-88f6cb2734d4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jfvff\" (UID: \"1d353c23-d497-4fb3-8672-88f6cb2734d4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.806185 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63e96b44-624a-4d42-b63e-22506f5bd250-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tcchp\" (UID: \"63e96b44-624a-4d42-b63e-22506f5bd250\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tcchp" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.806210 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsf24\" (UniqueName: \"kubernetes.io/projected/00f06e07-b382-4409-a7af-cd84abf48e99-kube-api-access-nsf24\") pod \"machine-config-operator-74547568cd-pnznr\" (UID: \"00f06e07-b382-4409-a7af-cd84abf48e99\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.806727 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:18 crc 
kubenswrapper[4870]: E0216 17:02:18.807035 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.306998733 +0000 UTC m=+143.790463117 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.807130 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:18 crc kubenswrapper[4870]: E0216 17:02:18.808244 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.308213357 +0000 UTC m=+143.791677741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.819813 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjdph\" (UniqueName: \"kubernetes.io/projected/4634141f-b890-48f3-b6c7-d8a730ff29b5-kube-api-access-xjdph\") pod \"service-ca-operator-777779d784-f4pwk\" (UID: \"4634141f-b890-48f3-b6c7-d8a730ff29b5\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f4pwk" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.822804 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8"] Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.829364 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.839424 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj8fh\" (UniqueName: \"kubernetes.io/projected/0ae17602-ebe6-41e2-9241-c0552e6a4e7e-kube-api-access-xj8fh\") pod \"dns-operator-744455d44c-8rbb8\" (UID: \"0ae17602-ebe6-41e2-9241-c0552e6a4e7e\") " pod="openshift-dns-operator/dns-operator-744455d44c-8rbb8" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.840301 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v25cw" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.867925 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7gg4\" (UniqueName: \"kubernetes.io/projected/1d353c23-d497-4fb3-8672-88f6cb2734d4-kube-api-access-m7gg4\") pod \"ingress-operator-5b745b69d9-jfvff\" (UID: \"1d353c23-d497-4fb3-8672-88f6cb2734d4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.873005 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.880869 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8rbb8" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.887083 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvm5q\" (UniqueName: \"kubernetes.io/projected/a39ee5b4-a248-4d15-95c3-bf801fe2c6de-kube-api-access-bvm5q\") pod \"machine-config-controller-84d6567774-lv95l\" (UID: \"a39ee5b4-a248-4d15-95c3-bf801fe2c6de\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.909600 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:18 crc kubenswrapper[4870]: E0216 17:02:18.909785 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.409747857 +0000 UTC m=+143.893212241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.910030 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.910980 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn6dq\" (UniqueName: \"kubernetes.io/projected/e3852642-f948-4814-8fbd-04301eb7b9c1-kube-api-access-pn6dq\") pod \"kube-storage-version-migrator-operator-b67b599dd-jxk8b\" (UID: \"e3852642-f948-4814-8fbd-04301eb7b9c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jxk8b" Feb 16 17:02:18 crc kubenswrapper[4870]: E0216 17:02:18.911151 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.411099135 +0000 UTC m=+143.894563679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.920592 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6btb\" (UniqueName: \"kubernetes.io/projected/05a3a12e-c6b1-4ff3-9926-c9bb85192b03-kube-api-access-q6btb\") pod \"multus-admission-controller-857f4d67dd-zgkhs\" (UID: \"05a3a12e-c6b1-4ff3-9926-c9bb85192b03\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zgkhs" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.926024 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5ll8r"] Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.941971 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9jrr\" (UniqueName: \"kubernetes.io/projected/d065b0a1-56ea-4bb1-aef7-9d9f46a46a42-kube-api-access-r9jrr\") pod \"machine-config-server-fthnr\" (UID: \"d065b0a1-56ea-4bb1-aef7-9d9f46a46a42\") " pod="openshift-machine-config-operator/machine-config-server-fthnr" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.951709 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.962565 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv8gd\" (UniqueName: \"kubernetes.io/projected/7f83f0b7-935c-4972-83c7-bd9a2d95afc1-kube-api-access-sv8gd\") pod \"cluster-image-registry-operator-dc59b4c8b-4k6sf\" (UID: \"7f83f0b7-935c-4972-83c7-bd9a2d95afc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.976285 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.984788 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fj5k\" (UniqueName: \"kubernetes.io/projected/b0166ac5-5759-4298-a49b-6a67d179944e-kube-api-access-7fj5k\") pod \"service-ca-9c57cc56f-9d2ls\" (UID: \"b0166ac5-5759-4298-a49b-6a67d179944e\") " pod="openshift-service-ca/service-ca-9c57cc56f-9d2ls" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.987040 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-k28wm" Feb 16 17:02:18 crc kubenswrapper[4870]: I0216 17:02:18.990874 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fthnr" Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.008373 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zgkhs" Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.018596 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:19 crc kubenswrapper[4870]: E0216 17:02:19.019466 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.519439958 +0000 UTC m=+144.002904342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.031616 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4nxwd" Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.059456 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tcchp" Feb 16 17:02:19 crc kubenswrapper[4870]: W0216 17:02:19.059717 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb804a3b_9f2e_4638_ae79_7ef21a87104d.slice/crio-16805e682513b26781d5e0e95495257e705a91f30cb8c5543b43588924cc3255 WatchSource:0}: Error finding container 16805e682513b26781d5e0e95495257e705a91f30cb8c5543b43588924cc3255: Status 404 returned error can't find the container with id 16805e682513b26781d5e0e95495257e705a91f30cb8c5543b43588924cc3255 Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.064611 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fplj9" event={"ID":"6cb09d74-7044-4c8c-a89b-6bf4593ffb9d","Type":"ContainerStarted","Data":"36a9534cc07f663efedec834e97ce404029ae29aef1330450bcc84ed866a2c8a"} Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.066447 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.066671 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" event={"ID":"68b31442-b3cc-486b-8fd1-e968978c9f1c","Type":"ContainerStarted","Data":"ca22c9e6ee56600ee0c03ee2a46b7eb75b6bc5d3ae4f029a0656e651ce7859e3"} Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.072876 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" event={"ID":"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d","Type":"ContainerStarted","Data":"24789dfaa9fef62a4a78e0bdd40e3803f2a529099cd0379ffd4d31068b843da0"} Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.072930 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" event={"ID":"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d","Type":"ContainerStarted","Data":"c822ca7eed026140734ecef386b1cdc55af8b87d6cdffac1030b12e4bde4f9ae"} Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.074322 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.077721 4870 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-c5snp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.077799 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" podUID="88cce487-2f4f-4589-8f7b-f6a1ed6bed2d" containerName="controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.077746 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n96b6" event={"ID":"ed053e72-4999-4b5d-a9f3-c58b92280c8c","Type":"ContainerStarted","Data":"f55ea301b80af414714f0bc3f5d5574283231f779bb9a6a279a21f2f2b6860e1"} Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.081136 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" event={"ID":"9c1cae5d-9592-47f5-9c64-301163ac7b1a","Type":"ContainerStarted","Data":"8d0ed41fc6d1aced752b9bc753ac4431ee39ff0c8bc12cec168ed2015b344d86"} Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.081186 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" event={"ID":"9c1cae5d-9592-47f5-9c64-301163ac7b1a","Type":"ContainerStarted","Data":"9bdb187269da02ebfb27af593aed8be8f702f4040554516f8aacc3391a366cb0"} Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.084583 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-b66lf" event={"ID":"4524185d-72ed-4ff4-be99-2a01cf133dbc","Type":"ContainerStarted","Data":"d202fdf051c09bd8477a173f2241d4d046db564f9abd9711a3c084810a0729a3"} Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.086136 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" event={"ID":"a30d36bf-bb06-4237-9273-eeee9188e931","Type":"ContainerStarted","Data":"d29f2e6cccd29618d28033124b72da44eb69d6a64d63aaa4b7ca60865fbf39f6"} Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.087283 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" 
event={"ID":"e138835e-4175-41cf-983c-6940600a8d32","Type":"ContainerStarted","Data":"ad2326727917ff1c67452c6e86ed53176819e6450130c1b89b8510007ef3db7e"} Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.093060 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jxk8b" Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.116097 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf" Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.120518 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f4pwk" Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.121251 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:19 crc kubenswrapper[4870]: E0216 17:02:19.121638 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.621622506 +0000 UTC m=+144.105086890 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.153564 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9d2ls" Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.224830 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:19 crc kubenswrapper[4870]: E0216 17:02:19.226370 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.726343026 +0000 UTC m=+144.209807410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.238838 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs"] Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.285454 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl"] Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.309071 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m4tjr"] Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.323320 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jz2fc"] Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.325930 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:19 crc kubenswrapper[4870]: E0216 17:02:19.326303 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:19.82628817 +0000 UTC m=+144.309752554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.349586 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cnjq9"] Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.357696 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" podStartSLOduration=123.357664682 podStartE2EDuration="2m3.357664682s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:19.344417726 +0000 UTC m=+143.827882110" watchObservedRunningTime="2026-02-16 17:02:19.357664682 +0000 UTC m=+143.841129066" Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.441362 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:19 crc kubenswrapper[4870]: E0216 17:02:19.442591 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.942560748 +0000 UTC m=+144.426025122 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.494691 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8df2c"] Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.519030 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl"] Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.537085 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp"] Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.544065 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:19 crc kubenswrapper[4870]: E0216 17:02:19.544613 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.044591902 +0000 UTC m=+144.528056286 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:19 crc kubenswrapper[4870]: W0216 17:02:19.592396 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18408396_529f_4df8_8c25_4c483ea6d203.slice/crio-586f1f8e44f5bea7b84bfaa9e9ea1cbcc38c29216e20016d8d012e999d18e1bd WatchSource:0}: Error finding container 586f1f8e44f5bea7b84bfaa9e9ea1cbcc38c29216e20016d8d012e999d18e1bd: Status 404 returned error can't find the container with id 586f1f8e44f5bea7b84bfaa9e9ea1cbcc38c29216e20016d8d012e999d18e1bd Feb 16 17:02:19 crc kubenswrapper[4870]: W0216 17:02:19.593864 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod559fe54f_c6f9_4466_b9c8_da6318fc8f59.slice/crio-e001811491a76e3131fcf68e520035fe816b6ab226c3537db9f1fa0fcb7305a0 WatchSource:0}: Error finding container e001811491a76e3131fcf68e520035fe816b6ab226c3537db9f1fa0fcb7305a0: Status 404 returned error can't find the container with id e001811491a76e3131fcf68e520035fe816b6ab226c3537db9f1fa0fcb7305a0 Feb 16 17:02:19 crc kubenswrapper[4870]: W0216 17:02:19.623765 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod986bb24c_5a0b_4fe9_bd99_e48c3477cc45.slice/crio-07150e5165ceff84970f4378f855259dab3f14aebf450129095d9ba7e85c04a0 WatchSource:0}: Error finding container 07150e5165ceff84970f4378f855259dab3f14aebf450129095d9ba7e85c04a0: Status 404 returned error can't find the container 
with id 07150e5165ceff84970f4378f855259dab3f14aebf450129095d9ba7e85c04a0 Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.650654 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:19 crc kubenswrapper[4870]: E0216 17:02:19.651189 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.151165144 +0000 UTC m=+144.634629518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.755777 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:19 crc kubenswrapper[4870]: E0216 17:02:19.756704 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.256670406 +0000 UTC m=+144.740134790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.751153 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8q7hf"] Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.774083 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rpzss"] Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.783881 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-g78th"] Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.863612 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:19 crc kubenswrapper[4870]: E0216 17:02:19.863807 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.363771734 +0000 UTC m=+144.847236118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.863897 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:19 crc kubenswrapper[4870]: E0216 17:02:19.864500 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.364480674 +0000 UTC m=+144.847945058 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.966190 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:19 crc kubenswrapper[4870]: E0216 17:02:19.967493 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.467470735 +0000 UTC m=+144.950935119 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:19 crc kubenswrapper[4870]: I0216 17:02:19.969552 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh"] Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.014291 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rqjhl"] Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.033189 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8sggp"] Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.080879 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:20 crc kubenswrapper[4870]: E0216 17:02:20.081847 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.581822889 +0000 UTC m=+145.065287273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.109392 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4gt6" podStartSLOduration=124.109365412 podStartE2EDuration="2m4.109365412s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:20.061887611 +0000 UTC m=+144.545351995" watchObservedRunningTime="2026-02-16 17:02:20.109365412 +0000 UTC m=+144.592829796" Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.131745 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8df2c" event={"ID":"18408396-529f-4df8-8c25-4c483ea6d203","Type":"ContainerStarted","Data":"586f1f8e44f5bea7b84bfaa9e9ea1cbcc38c29216e20016d8d012e999d18e1bd"} Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.135657 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" event={"ID":"986bb24c-5a0b-4fe9-bd99-e48c3477cc45","Type":"ContainerStarted","Data":"07150e5165ceff84970f4378f855259dab3f14aebf450129095d9ba7e85c04a0"} Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.144340 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-b66lf" 
event={"ID":"4524185d-72ed-4ff4-be99-2a01cf133dbc","Type":"ContainerStarted","Data":"adc8439bdc159d5505ce8041f061911b9653e32f7683461fbd24c1d6adfef278"} Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.149905 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" event={"ID":"cc3d5e28-d52f-41d9-8360-faa12e014349","Type":"ContainerStarted","Data":"8fe66618ffa41cad2d19b7678d2fc4ffc837dcba6782607863e23b7ad9190fac"} Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.152644 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" event={"ID":"e138835e-4175-41cf-983c-6940600a8d32","Type":"ContainerStarted","Data":"cc87d3fa08656d647044bcbf0670002596e78ae3e1dcde4ed9472a6e1890d394"} Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.157138 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-g78th" event={"ID":"ca7e9d14-d778-46fa-bbd4-326a1cf28a38","Type":"ContainerStarted","Data":"9c0c2b11aebb05739a7db81c31dde606bf3807d457acc64d0df0d8e3022b93fa"} Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.161583 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" event={"ID":"d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4","Type":"ContainerStarted","Data":"4cb38443c7c9c2d050eff6cee84b8ca39492db07c302d13296231fded40533f5"} Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.165114 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fplj9" event={"ID":"6cb09d74-7044-4c8c-a89b-6bf4593ffb9d","Type":"ContainerStarted","Data":"960a254c60109051e21b55b262560f10ccc6e16bbe0dbbba601ecb6b242f45ac"} Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.167483 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fplj9" Feb 
16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.169882 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8q7hf" event={"ID":"6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9","Type":"ContainerStarted","Data":"22fa37b2872701fe4c551a52dab204b4f964b11925691bc7f93a04e2a14a40bf"} Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.170142 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-fplj9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.170197 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fplj9" podUID="6cb09d74-7044-4c8c-a89b-6bf4593ffb9d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.176199 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m4tjr" event={"ID":"e7a20c9f-7be2-422e-bb13-de026cae08f7","Type":"ContainerStarted","Data":"dfddaf750b4c9027ccc52aefdd31ff6bc8f93b62e91ac0137149bcb2eaee8132"} Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.179063 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" event={"ID":"db804a3b-9f2e-4638-ae79-7ef21a87104d","Type":"ContainerStarted","Data":"16805e682513b26781d5e0e95495257e705a91f30cb8c5543b43588924cc3255"} Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.184764 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:20 crc kubenswrapper[4870]: E0216 17:02:20.185230 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.685210691 +0000 UTC m=+145.168675075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.191192 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" event={"ID":"559fe54f-c6f9-4466-b9c8-da6318fc8f59","Type":"ContainerStarted","Data":"e001811491a76e3131fcf68e520035fe816b6ab226c3537db9f1fa0fcb7305a0"} Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.217792 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" event={"ID":"9c1cae5d-9592-47f5-9c64-301163ac7b1a","Type":"ContainerStarted","Data":"fe1a2e2999e68e37c23f189113654b12cee4bd29b9c9911fc26a5e9a1633e77b"} Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.221332 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" 
event={"ID":"68b31442-b3cc-486b-8fd1-e968978c9f1c","Type":"ContainerStarted","Data":"02541e52ed3c0edab1f8128f4c285f39a5da79c78ef05bebf732471495c47805"} Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.236866 4870 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-c5snp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.237003 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" podUID="88cce487-2f4f-4589-8f7b-f6a1ed6bed2d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.289702 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.291566 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" event={"ID":"3d988003-748a-4bb4-ac42-c38d41a5295b","Type":"ContainerStarted","Data":"8018f7050cf18fed3e54e1102c5516f72a66573e007a064180fc2f94e8f2d71b"} Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.291608 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n96b6" 
event={"ID":"ed053e72-4999-4b5d-a9f3-c58b92280c8c","Type":"ContainerStarted","Data":"67e5badcc69d4c67e29b3dc3fca67b1b769d32f8d9bd04e0c5b18896433199c9"} Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.291626 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fthnr" event={"ID":"d065b0a1-56ea-4bb1-aef7-9d9f46a46a42","Type":"ContainerStarted","Data":"59ebcaaa566500e3bfeb119bf41283f2dbd7f3a7f5c3e9ab621e991edf2d0057"} Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.291636 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" event={"ID":"b0e0ea5e-92af-42e9-9f96-809c376bcc69","Type":"ContainerStarted","Data":"3315badcfce7ccb872ce422f491ec0df7a133f83974841f9af89e5e0924ff485"} Feb 16 17:02:20 crc kubenswrapper[4870]: E0216 17:02:20.293207 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.793190433 +0000 UTC m=+145.276654817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.399776 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:20 crc kubenswrapper[4870]: E0216 17:02:20.400134 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.89991303 +0000 UTC m=+145.383377414 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.400517 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:20 crc kubenswrapper[4870]: E0216 17:02:20.400901 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.900893488 +0000 UTC m=+145.384357872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.420332 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf"] Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.437441 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bn28c"] Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.458886 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8rbb8"] Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.502128 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:20 crc kubenswrapper[4870]: E0216 17:02:20.502782 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:21.002758046 +0000 UTC m=+145.486222440 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.593037 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v25cw"] Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.596114 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.599456 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.599504 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.610871 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 
16 17:02:20 crc kubenswrapper[4870]: E0216 17:02:20.611893 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:21.111873151 +0000 UTC m=+145.595337535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.713939 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 17:02:20 crc kubenswrapper[4870]: E0216 17:02:20.714128 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:21.21410535 +0000 UTC m=+145.697569734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.714570 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm"
Feb 16 17:02:20 crc kubenswrapper[4870]: E0216 17:02:20.714932 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:21.214922114 +0000 UTC m=+145.698386498 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.747752 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-j4dv8" podStartSLOduration=123.747706727 podStartE2EDuration="2m3.747706727s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:20.738504885 +0000 UTC m=+145.221969289" watchObservedRunningTime="2026-02-16 17:02:20.747706727 +0000 UTC m=+145.231171111"
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.779431 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-b66lf" podStartSLOduration=123.779404739 podStartE2EDuration="2m3.779404739s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:20.776600269 +0000 UTC m=+145.260064643" watchObservedRunningTime="2026-02-16 17:02:20.779404739 +0000 UTC m=+145.262869123"
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.817846 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 17:02:20 crc kubenswrapper[4870]: E0216 17:02:20.818000 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:21.317974346 +0000 UTC m=+145.801438730 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.818211 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm"
Feb 16 17:02:20 crc kubenswrapper[4870]: E0216 17:02:20.818574 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:21.318563343 +0000 UTC m=+145.802027727 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.818922 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tlsvw" podStartSLOduration=124.818909483 podStartE2EDuration="2m4.818909483s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:20.817788381 +0000 UTC m=+145.301252765" watchObservedRunningTime="2026-02-16 17:02:20.818909483 +0000 UTC m=+145.302373867"
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.851070 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-k28wm"]
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.851900 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l"]
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.864444 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zgkhs"]
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.868457 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jxk8b"]
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.869443 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff"]
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.871009 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4jpbt"]
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.871495 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-fplj9" podStartSLOduration=124.871473318 podStartE2EDuration="2m4.871473318s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:20.859829887 +0000 UTC m=+145.343294291" watchObservedRunningTime="2026-02-16 17:02:20.871473318 +0000 UTC m=+145.354937702"
Feb 16 17:02:20 crc kubenswrapper[4870]: W0216 17:02:20.901417 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda39ee5b4_a248_4d15_95c3_bf801fe2c6de.slice/crio-1e18c7fdc243174918f2d8b9251de812952a39b3b534454990f52280168eee48 WatchSource:0}: Error finding container 1e18c7fdc243174918f2d8b9251de812952a39b3b534454990f52280168eee48: Status 404 returned error can't find the container with id 1e18c7fdc243174918f2d8b9251de812952a39b3b534454990f52280168eee48
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.902746 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tcchp"]
Feb 16 17:02:20 crc kubenswrapper[4870]: W0216 17:02:20.909819 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05a3a12e_c6b1_4ff3_9926_c9bb85192b03.slice/crio-815ac256e09ebaa43c2970e1c7ebe2418bf1469279ecf441a45623a149d469ad WatchSource:0}: Error finding container 815ac256e09ebaa43c2970e1c7ebe2418bf1469279ecf441a45623a149d469ad: Status 404 returned error can't find the container with id 815ac256e09ebaa43c2970e1c7ebe2418bf1469279ecf441a45623a149d469ad
Feb 16 17:02:20 crc kubenswrapper[4870]: W0216 17:02:20.910090 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3852642_f948_4814_8fbd_04301eb7b9c1.slice/crio-aafc27f5cdffdde969ba20ea287e58c564c4e6b1b8376146a600fda31b3f468f WatchSource:0}: Error finding container aafc27f5cdffdde969ba20ea287e58c564c4e6b1b8376146a600fda31b3f468f: Status 404 returned error can't find the container with id aafc27f5cdffdde969ba20ea287e58c564c4e6b1b8376146a600fda31b3f468f
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.914102 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4nxwd"]
Feb 16 17:02:20 crc kubenswrapper[4870]: W0216 17:02:20.914471 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d353c23_d497_4fb3_8672_88f6cb2734d4.slice/crio-a9890e1fdc2022d3de2deaec8db4601819b19c96696b0f3ff7382c0eccb1468e WatchSource:0}: Error finding container a9890e1fdc2022d3de2deaec8db4601819b19c96696b0f3ff7382c0eccb1468e: Status 404 returned error can't find the container with id a9890e1fdc2022d3de2deaec8db4601819b19c96696b0f3ff7382c0eccb1468e
Feb 16 17:02:20 crc kubenswrapper[4870]: W0216 17:02:20.915663 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9ed0cdf_88f2_42cd_93e9_22517410ca31.slice/crio-10eefd75e0f4df7f25b977f8aeaa6df9227ffb8f80a2c0c5cf8da7ce264db4df WatchSource:0}: Error finding container 10eefd75e0f4df7f25b977f8aeaa6df9227ffb8f80a2c0c5cf8da7ce264db4df: Status 404 returned error can't find the container with id 10eefd75e0f4df7f25b977f8aeaa6df9227ffb8f80a2c0c5cf8da7ce264db4df
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.917958 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-n96b6" podStartSLOduration=124.91791263 podStartE2EDuration="2m4.91791263s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:20.91335179 +0000 UTC m=+145.396816174" watchObservedRunningTime="2026-02-16 17:02:20.91791263 +0000 UTC m=+145.401377014"
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.918722 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 17:02:20 crc kubenswrapper[4870]: E0216 17:02:20.919756 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:21.419737392 +0000 UTC m=+145.903201776 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.931420 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf"]
Feb 16 17:02:20 crc kubenswrapper[4870]: I0216 17:02:20.934773 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr"]
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.001642 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9d2ls"]
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.017640 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f4pwk"]
Feb 16 17:02:21 crc kubenswrapper[4870]: W0216 17:02:21.019907 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f83f0b7_935c_4972_83c7_bd9a2d95afc1.slice/crio-cb5a714f3536a24e538008cd7132a8c576a813a51f70d4c397b01107618af703 WatchSource:0}: Error finding container cb5a714f3536a24e538008cd7132a8c576a813a51f70d4c397b01107618af703: Status 404 returned error can't find the container with id cb5a714f3536a24e538008cd7132a8c576a813a51f70d4c397b01107618af703
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.021448 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm"
Feb 16 17:02:21 crc kubenswrapper[4870]: W0216 17:02:21.024892 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00f06e07_b382_4409_a7af_cd84abf48e99.slice/crio-2b1469a989ee49f29d76060fed337ebcbfca081f15b93e8b2f9958b95faeec1d WatchSource:0}: Error finding container 2b1469a989ee49f29d76060fed337ebcbfca081f15b93e8b2f9958b95faeec1d: Status 404 returned error can't find the container with id 2b1469a989ee49f29d76060fed337ebcbfca081f15b93e8b2f9958b95faeec1d
Feb 16 17:02:21 crc kubenswrapper[4870]: E0216 17:02:21.026638 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:21.526598433 +0000 UTC m=+146.010062817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 17:02:21 crc kubenswrapper[4870]: W0216 17:02:21.029390 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0166ac5_5759_4298_a49b_6a67d179944e.slice/crio-1e059ca7d9decbc32306f8ac30a3b9f3cddf48b9fd16504da2d62c961fc71a8e WatchSource:0}: Error finding container 1e059ca7d9decbc32306f8ac30a3b9f3cddf48b9fd16504da2d62c961fc71a8e: Status 404 returned error can't find the container with id 1e059ca7d9decbc32306f8ac30a3b9f3cddf48b9fd16504da2d62c961fc71a8e
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.123101 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 17:02:21 crc kubenswrapper[4870]: E0216 17:02:21.123286 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:21.623259213 +0000 UTC m=+146.106723597 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.123368 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm"
Feb 16 17:02:21 crc kubenswrapper[4870]: E0216 17:02:21.123826 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:21.623816419 +0000 UTC m=+146.107280803 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.224743 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 17:02:21 crc kubenswrapper[4870]: E0216 17:02:21.224975 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:21.724937146 +0000 UTC m=+146.208401520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.225551 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm"
Feb 16 17:02:21 crc kubenswrapper[4870]: E0216 17:02:21.226191 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:21.726159301 +0000 UTC m=+146.209623685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.240915 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" event={"ID":"ad671622-917b-4e62-a887-d2d6e0935f2e","Type":"ContainerStarted","Data":"55a6ab5af4c26d2f5b24e2f91cef331398123d76f53880c6d570dc82cb4a6e72"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.242345 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bn28c" event={"ID":"fa1008c7-de78-4cc4-93d1-b6b22198a05a","Type":"ContainerStarted","Data":"0baaf68ed35721af005a7d2a3a70ce78aa355e815af574d6aefabb6c3470d59b"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.244787 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" event={"ID":"68b31442-b3cc-486b-8fd1-e968978c9f1c","Type":"ContainerStarted","Data":"bc5246cc2200f416c6e769188c26bdfef929efbc317f587cfe9870414c7116ea"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.246586 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9d2ls" event={"ID":"b0166ac5-5759-4298-a49b-6a67d179944e","Type":"ContainerStarted","Data":"1e059ca7d9decbc32306f8ac30a3b9f3cddf48b9fd16504da2d62c961fc71a8e"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.248076 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-k28wm" event={"ID":"db7abdfa-44a0-4c7b-b314-bec98e87552d","Type":"ContainerStarted","Data":"ee3b4c790096851fb8505843a0d82bed130562effaf268b5509e44037370bb03"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.249410 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4nxwd" event={"ID":"8779ed51-68c4-4fc0-8e83-994215a16ba0","Type":"ContainerStarted","Data":"d9d6ac01ceb601d669f787b1541ec3adc13aeb83b0e5a64f03443b53ca1b7d9a"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.250746 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rqjhl" event={"ID":"d2caf342-1317-477b-bf32-eb860c7395c8","Type":"ContainerStarted","Data":"df0c981b48176e9bc7ae49509aa764d6232c93814e4b4644d7528884a90ceb1c"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.252168 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" event={"ID":"d9ed0cdf-88f2-42cd-93e9-22517410ca31","Type":"ContainerStarted","Data":"10eefd75e0f4df7f25b977f8aeaa6df9227ffb8f80a2c0c5cf8da7ce264db4df"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.254172 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" event={"ID":"db804a3b-9f2e-4638-ae79-7ef21a87104d","Type":"ContainerStarted","Data":"3fdde906c153ce907b2a7d137a93f696ec8cd685a2c05d263d3de9f5fc6a0ded"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.255775 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" event={"ID":"08d93167-bc2f-4032-9840-f5eda9916ddd","Type":"ContainerStarted","Data":"75e47ded99d57e0a76a9357ef8f86ba069aee97d9fe13dd8e98e65efbb47dd93"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.256985 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf" event={"ID":"7f83f0b7-935c-4972-83c7-bd9a2d95afc1","Type":"ContainerStarted","Data":"cb5a714f3536a24e538008cd7132a8c576a813a51f70d4c397b01107618af703"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.258298 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tcchp" event={"ID":"63e96b44-624a-4d42-b63e-22506f5bd250","Type":"ContainerStarted","Data":"b40d97abafd17df224edd59b4c30a62078b4e98b6b79b4c29442ca7228971c0b"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.259746 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8rbb8" event={"ID":"0ae17602-ebe6-41e2-9241-c0552e6a4e7e","Type":"ContainerStarted","Data":"733abccd1dd9cd0702938ea6688d8cccf5a502dae21d906a9db497300023a14d"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.261370 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" event={"ID":"b0e0ea5e-92af-42e9-9f96-809c376bcc69","Type":"ContainerStarted","Data":"c8d02dad44286f91d6de18368332a5cbcc9b7eb186668d27588f59e20f9dd1bf"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.261735 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs"
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.263233 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jxk8b" event={"ID":"e3852642-f948-4814-8fbd-04301eb7b9c1","Type":"ContainerStarted","Data":"aafc27f5cdffdde969ba20ea287e58c564c4e6b1b8376146a600fda31b3f468f"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.264224 4870 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-g6xvs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.264276 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" podUID="b0e0ea5e-92af-42e9-9f96-809c376bcc69" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.264421 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" event={"ID":"00f06e07-b382-4409-a7af-cd84abf48e99","Type":"ContainerStarted","Data":"2b1469a989ee49f29d76060fed337ebcbfca081f15b93e8b2f9958b95faeec1d"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.266428 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zgkhs" event={"ID":"05a3a12e-c6b1-4ff3-9926-c9bb85192b03","Type":"ContainerStarted","Data":"815ac256e09ebaa43c2970e1c7ebe2418bf1469279ecf441a45623a149d469ad"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.269436 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8sggp" event={"ID":"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302","Type":"ContainerStarted","Data":"3ee6ec817b66f248c2052177887f0a9e052089e77b64724bcd69ca06fb5d6ab8"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.270968 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l" event={"ID":"a39ee5b4-a248-4d15-95c3-bf801fe2c6de","Type":"ContainerStarted","Data":"1e18c7fdc243174918f2d8b9251de812952a39b3b534454990f52280168eee48"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.272231 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rpzss" event={"ID":"5dc49196-10d0-4e90-8523-8f0d055c5800","Type":"ContainerStarted","Data":"1c02c8377b5def7c193eeedc253bea980fcdf748689366bf993515b5af8ac421"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.273335 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v25cw" event={"ID":"73e4dd57-ca25-46e2-9afb-a67a8d339e67","Type":"ContainerStarted","Data":"3f697307d6f694a8eb1e69858eb85579167f9b437042dddec53367ea89acaa10"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.275107 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" event={"ID":"1d353c23-d497-4fb3-8672-88f6cb2734d4","Type":"ContainerStarted","Data":"a9890e1fdc2022d3de2deaec8db4601819b19c96696b0f3ff7382c0eccb1468e"}
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.275918 4870 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-c5snp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.275987 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" podUID="88cce487-2f4f-4589-8f7b-f6a1ed6bed2d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.276227 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-fplj9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.276300 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fplj9" podUID="6cb09d74-7044-4c8c-a89b-6bf4593ffb9d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.285119 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" podStartSLOduration=124.285086518 podStartE2EDuration="2m4.285086518s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:21.278891662 +0000 UTC m=+145.762356066" watchObservedRunningTime="2026-02-16 17:02:21.285086518 +0000 UTC m=+145.768550922"
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.326797 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 17:02:21 crc kubenswrapper[4870]: E0216 17:02:21.326978 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:21.826937909 +0000 UTC m=+146.310402293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.327493 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm"
Feb 16 17:02:21 crc kubenswrapper[4870]: E0216 17:02:21.330288 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:21.830270644 +0000 UTC m=+146.313735018 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 17:02:21 crc kubenswrapper[4870]: W0216 17:02:21.352530 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4634141f_b890_48f3_b6c7_d8a730ff29b5.slice/crio-8ffe7ada091662a9e3f3e207d0f52cdceadd8e5d06a4ab9db79ed1ed8b33427c WatchSource:0}: Error finding container 8ffe7ada091662a9e3f3e207d0f52cdceadd8e5d06a4ab9db79ed1ed8b33427c: Status 404 returned error can't find the container with id 8ffe7ada091662a9e3f3e207d0f52cdceadd8e5d06a4ab9db79ed1ed8b33427c
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.430114 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 17:02:21 crc kubenswrapper[4870]: E0216 17:02:21.430855 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:21.930830795 +0000 UTC m=+146.414295179 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.532373 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm"
Feb 16 17:02:21 crc kubenswrapper[4870]: E0216 17:02:21.532881 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:22.032862688 +0000 UTC m=+146.516327072 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.597629 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.597708 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.633771 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:21 crc kubenswrapper[4870]: E0216 17:02:21.634072 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:22.134042667 +0000 UTC m=+146.617507051 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.634372 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:21 crc kubenswrapper[4870]: E0216 17:02:21.634868 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:22.1348468 +0000 UTC m=+146.618311184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.736254 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:21 crc kubenswrapper[4870]: E0216 17:02:21.736913 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:22.236892204 +0000 UTC m=+146.720356588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.839524 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:21 crc kubenswrapper[4870]: E0216 17:02:21.840075 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:22.340047699 +0000 UTC m=+146.823512073 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.940559 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:21 crc kubenswrapper[4870]: E0216 17:02:21.940809 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:22.440756155 +0000 UTC m=+146.924220539 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:21 crc kubenswrapper[4870]: I0216 17:02:21.941434 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:21 crc kubenswrapper[4870]: E0216 17:02:21.942069 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:22.442045142 +0000 UTC m=+146.925509526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.042836 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:22 crc kubenswrapper[4870]: E0216 17:02:22.043341 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:22.543319583 +0000 UTC m=+147.026783967 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.144766 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:22 crc kubenswrapper[4870]: E0216 17:02:22.145802 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:22.645786059 +0000 UTC m=+147.129250443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.246086 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:22 crc kubenswrapper[4870]: E0216 17:02:22.246633 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:22.746614318 +0000 UTC m=+147.230078702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.332185 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4nxwd" event={"ID":"8779ed51-68c4-4fc0-8e83-994215a16ba0","Type":"ContainerStarted","Data":"7eabdc1fa8d98b571019807005bccb1bdd7022ac36d7235fb176a2d7366d08ad"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.338231 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rqjhl" event={"ID":"d2caf342-1317-477b-bf32-eb860c7395c8","Type":"ContainerStarted","Data":"3bc466992e517fff22d81fc6294bcca9e8988b3fb71e9539fbb8490e68ecbf6a"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.347724 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:22 crc kubenswrapper[4870]: E0216 17:02:22.348228 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:22.848209978 +0000 UTC m=+147.331674362 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.376630 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-k28wm" event={"ID":"db7abdfa-44a0-4c7b-b314-bec98e87552d","Type":"ContainerStarted","Data":"e5b7a66c449f7d73d7582eaa8b4337617d62504e93c025a8823e6da8ece83bb0"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.381976 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" event={"ID":"d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4","Type":"ContainerStarted","Data":"d048b687052b674d8e69cdbe22c28b87018ca8b9238bc5497dd4b9488c2034dc"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.383440 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8df2c" event={"ID":"18408396-529f-4df8-8c25-4c483ea6d203","Type":"ContainerStarted","Data":"5a5fcf3d9c68f61a1c3fd2bdb519d6da8e63794079cfbff564baa5bf1629acb9"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.387475 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m4tjr" event={"ID":"e7a20c9f-7be2-422e-bb13-de026cae08f7","Type":"ContainerStarted","Data":"26fe73aba74148c24239c98be2f556928f58f824d6e42899268af9b942a8d523"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.396279 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" event={"ID":"00f06e07-b382-4409-a7af-cd84abf48e99","Type":"ContainerStarted","Data":"7f5af0394236fd4a5c474384ccac8b15c04eb2f67a9cfd257bd75e9f85b31ded"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.398650 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" event={"ID":"cc3d5e28-d52f-41d9-8360-faa12e014349","Type":"ContainerStarted","Data":"a0ed2c35b653d51c7cb91494b4be15421c10fedcf4dd67262faab94065834780"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.398860 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.400726 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" event={"ID":"ad671622-917b-4e62-a887-d2d6e0935f2e","Type":"ContainerStarted","Data":"bd8356847a79aea985e806460936709f6752079e00ec36e216e381c5f0178d6a"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.403250 4870 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-b5nnl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.403306 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" podUID="cc3d5e28-d52f-41d9-8360-faa12e014349" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.424041 4870 generic.go:334] "Generic (PLEG): container finished" 
podID="559fe54f-c6f9-4466-b9c8-da6318fc8f59" containerID="26c7932b87aeaf05a49e790b5d8590118a0dcd9ad7e376f54f4572c271c52f1e" exitCode=0 Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.424117 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" event={"ID":"559fe54f-c6f9-4466-b9c8-da6318fc8f59","Type":"ContainerDied","Data":"26c7932b87aeaf05a49e790b5d8590118a0dcd9ad7e376f54f4572c271c52f1e"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.425369 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" podStartSLOduration=126.425345363 podStartE2EDuration="2m6.425345363s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:22.424329304 +0000 UTC m=+146.907793678" watchObservedRunningTime="2026-02-16 17:02:22.425345363 +0000 UTC m=+146.908809747" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.437157 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fthnr" event={"ID":"d065b0a1-56ea-4bb1-aef7-9d9f46a46a42","Type":"ContainerStarted","Data":"a77f7ba2844e32b5335d6c0368ef8dd338ddc8c874e80ad2cd9b359470090cf8"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.439223 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" event={"ID":"1d353c23-d497-4fb3-8672-88f6cb2734d4","Type":"ContainerStarted","Data":"a0bdcabfffd4a89fb34fe1f4bae27ad2074a53b21deda4720d23855e5200a86e"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.440974 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9d2ls" 
event={"ID":"b0166ac5-5759-4298-a49b-6a67d179944e","Type":"ContainerStarted","Data":"ff65bf89eff1acbbb5f6913cbaebf9a4e10166428933406e70fa83f831d40b66"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.448677 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:22 crc kubenswrapper[4870]: E0216 17:02:22.449888 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:22.949872311 +0000 UTC m=+147.433336695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.458357 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8rbb8" event={"ID":"0ae17602-ebe6-41e2-9241-c0552e6a4e7e","Type":"ContainerStarted","Data":"bbbcd3772ab194892c66eb5fad7ba3f69577758916920e4b2d2d1deecf10fdc9"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.469548 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8df2c" podStartSLOduration=125.46951925 
podStartE2EDuration="2m5.46951925s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:22.444251531 +0000 UTC m=+146.927715945" watchObservedRunningTime="2026-02-16 17:02:22.46951925 +0000 UTC m=+146.952983634" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.470930 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" podStartSLOduration=125.470916259 podStartE2EDuration="2m5.470916259s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:22.459607338 +0000 UTC m=+146.943071752" watchObservedRunningTime="2026-02-16 17:02:22.470916259 +0000 UTC m=+146.954380653" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.495842 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m4tjr" podStartSLOduration=125.495809418 podStartE2EDuration="2m5.495809418s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:22.49551679 +0000 UTC m=+146.978981174" watchObservedRunningTime="2026-02-16 17:02:22.495809418 +0000 UTC m=+146.979273802" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.498663 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bn28c" event={"ID":"fa1008c7-de78-4cc4-93d1-b6b22198a05a","Type":"ContainerStarted","Data":"e2c81392b7128460e2966d255808e35053ca8bf942b1afcc482e02945b70da96"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.504644 4870 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" event={"ID":"3d988003-748a-4bb4-ac42-c38d41a5295b","Type":"ContainerStarted","Data":"b4054f8128977600fec382c1ae890194d43f1e93b0d126af00d7c0d910dae7b3"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.520017 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v25cw" event={"ID":"73e4dd57-ca25-46e2-9afb-a67a8d339e67","Type":"ContainerStarted","Data":"03471f17f6d1c15ed112318fcd41ca596b8cb18748827e6e5fa88ef6dfae49d9"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.525273 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fthnr" podStartSLOduration=6.525253856 podStartE2EDuration="6.525253856s" podCreationTimestamp="2026-02-16 17:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:22.523399673 +0000 UTC m=+147.006864057" watchObservedRunningTime="2026-02-16 17:02:22.525253856 +0000 UTC m=+147.008718240" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.545418 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f4pwk" event={"ID":"4634141f-b890-48f3-b6c7-d8a730ff29b5","Type":"ContainerStarted","Data":"8ffe7ada091662a9e3f3e207d0f52cdceadd8e5d06a4ab9db79ed1ed8b33427c"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.549728 4870 generic.go:334] "Generic (PLEG): container finished" podID="a30d36bf-bb06-4237-9273-eeee9188e931" containerID="89bd2319c6b5f0a2f3329269f77f58d93c31a7d42bbb7e370ffb2ce55f57b3cc" exitCode=0 Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.549806 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" event={"ID":"a30d36bf-bb06-4237-9273-eeee9188e931","Type":"ContainerDied","Data":"89bd2319c6b5f0a2f3329269f77f58d93c31a7d42bbb7e370ffb2ce55f57b3cc"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.552711 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.558794 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-jz2fc" podStartSLOduration=126.55877312 podStartE2EDuration="2m6.55877312s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:22.556843175 +0000 UTC m=+147.040307569" watchObservedRunningTime="2026-02-16 17:02:22.55877312 +0000 UTC m=+147.042237504" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.563919 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" event={"ID":"986bb24c-5a0b-4fe9-bd99-e48c3477cc45","Type":"ContainerStarted","Data":"06f07b84006fa37a77f2696014af0776241cbd39c6f528a0895df4147f6f3640"} Feb 16 17:02:22 crc kubenswrapper[4870]: E0216 17:02:22.565043 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:23.065020897 +0000 UTC m=+147.548485291 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.565376 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.578403 4870 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6v8zp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.580775 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" podUID="986bb24c-5a0b-4fe9-bd99-e48c3477cc45" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.583134 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8q7hf" event={"ID":"6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9","Type":"ContainerStarted","Data":"0df02a0e278be53b5fc770a4e528205c9d51eda83c96d53069043d865c8265f5"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.583203 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8q7hf" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 
17:02:22.587281 4870 patch_prober.go:28] interesting pod/console-operator-58897d9998-8q7hf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.587338 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8q7hf" podUID="6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.589038 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bn28c" podStartSLOduration=125.5890176 podStartE2EDuration="2m5.5890176s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:22.586016085 +0000 UTC m=+147.069480479" watchObservedRunningTime="2026-02-16 17:02:22.5890176 +0000 UTC m=+147.072481984" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.602473 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:22 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:22 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:22 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.602530 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" 
podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.623910 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f4pwk" podStartSLOduration=125.623883262 podStartE2EDuration="2m5.623883262s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:22.612037325 +0000 UTC m=+147.095501709" watchObservedRunningTime="2026-02-16 17:02:22.623883262 +0000 UTC m=+147.107347646" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.625750 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" event={"ID":"08d93167-bc2f-4032-9840-f5eda9916ddd","Type":"ContainerStarted","Data":"eb0047339f951034ec890b59e199151e40dae2a61018cbbd0c2d7c455c6384d7"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.626979 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.633498 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-g78th" event={"ID":"ca7e9d14-d778-46fa-bbd4-326a1cf28a38","Type":"ContainerStarted","Data":"79c9190f28c0b8594827e2919ebf2cf58e481f1ae30009d1fa4e7a2c3627d3bc"} Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.633684 4870 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-g6xvs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 
16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.633746 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" podUID="b0e0ea5e-92af-42e9-9f96-809c376bcc69" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.634585 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.635699 4870 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jxjrh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.635780 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" podUID="08d93167-bc2f-4032-9840-f5eda9916ddd" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.635780 4870 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5ll8r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.635840 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" podUID="db804a3b-9f2e-4638-ae79-7ef21a87104d" containerName="oauth-openshift" 
probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.640621 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" podStartSLOduration=125.640592638 podStartE2EDuration="2m5.640592638s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:22.637049917 +0000 UTC m=+147.120514311" watchObservedRunningTime="2026-02-16 17:02:22.640592638 +0000 UTC m=+147.124057012" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.654535 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:22 crc kubenswrapper[4870]: E0216 17:02:22.657963 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:23.157917591 +0000 UTC m=+147.641381975 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.712444 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8q7hf" podStartSLOduration=126.712416781 podStartE2EDuration="2m6.712416781s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:22.711543687 +0000 UTC m=+147.195008081" watchObservedRunningTime="2026-02-16 17:02:22.712416781 +0000 UTC m=+147.195881165" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.770847 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" podStartSLOduration=126.770817363 podStartE2EDuration="2m6.770817363s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:22.750215147 +0000 UTC m=+147.233679531" watchObservedRunningTime="2026-02-16 17:02:22.770817363 +0000 UTC m=+147.254281747" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.783422 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: 
\"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:22 crc kubenswrapper[4870]: E0216 17:02:22.787100 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:23.287078316 +0000 UTC m=+147.770542790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.800299 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" podStartSLOduration=125.800274971 podStartE2EDuration="2m5.800274971s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:22.797518473 +0000 UTC m=+147.280982857" watchObservedRunningTime="2026-02-16 17:02:22.800274971 +0000 UTC m=+147.283739355" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.865436 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-br5s9" podStartSLOduration=125.865368384 podStartE2EDuration="2m5.865368384s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 
17:02:22.853587158 +0000 UTC m=+147.337051552" watchObservedRunningTime="2026-02-16 17:02:22.865368384 +0000 UTC m=+147.348832768" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.865895 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-g78th" podStartSLOduration=6.865887498 podStartE2EDuration="6.865887498s" podCreationTimestamp="2026-02-16 17:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:22.82235665 +0000 UTC m=+147.305821034" watchObservedRunningTime="2026-02-16 17:02:22.865887498 +0000 UTC m=+147.349351882" Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.888219 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:22 crc kubenswrapper[4870]: E0216 17:02:22.888441 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:23.388395439 +0000 UTC m=+147.871859823 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.888671 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:22 crc kubenswrapper[4870]: E0216 17:02:22.889154 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:23.38913673 +0000 UTC m=+147.872601114 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:22 crc kubenswrapper[4870]: I0216 17:02:22.993071 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:22 crc kubenswrapper[4870]: E0216 17:02:22.993535 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:23.49351218 +0000 UTC m=+147.976976564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.095418 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:23 crc kubenswrapper[4870]: E0216 17:02:23.096315 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:23.596299195 +0000 UTC m=+148.079763579 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.197176 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:23 crc kubenswrapper[4870]: E0216 17:02:23.197667 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:23.697649369 +0000 UTC m=+148.181113753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.300047 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:23 crc kubenswrapper[4870]: E0216 17:02:23.300484 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:23.800465914 +0000 UTC m=+148.283930298 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.324396 4870 csr.go:261] certificate signing request csr-q9p2m is approved, waiting to be issued Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.335218 4870 csr.go:257] certificate signing request csr-q9p2m is issued Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.402051 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:23 crc kubenswrapper[4870]: E0216 17:02:23.402338 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:23.902304552 +0000 UTC m=+148.385768936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.402456 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:23 crc kubenswrapper[4870]: E0216 17:02:23.402873 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:23.902860018 +0000 UTC m=+148.386324402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.507615 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:23 crc kubenswrapper[4870]: E0216 17:02:23.507837 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.007802644 +0000 UTC m=+148.491267028 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.508115 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:23 crc kubenswrapper[4870]: E0216 17:02:23.508507 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.008489383 +0000 UTC m=+148.491953767 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.601632 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:23 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:23 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:23 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.602193 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.609337 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:23 crc kubenswrapper[4870]: E0216 17:02:23.609786 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:24.109747315 +0000 UTC m=+148.593211699 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.642178 4870 generic.go:334] "Generic (PLEG): container finished" podID="d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4" containerID="d048b687052b674d8e69cdbe22c28b87018ca8b9238bc5497dd4b9488c2034dc" exitCode=0 Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.643043 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" event={"ID":"d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4","Type":"ContainerDied","Data":"d048b687052b674d8e69cdbe22c28b87018ca8b9238bc5497dd4b9488c2034dc"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.647500 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l" event={"ID":"a39ee5b4-a248-4d15-95c3-bf801fe2c6de","Type":"ContainerStarted","Data":"95b0fb0bcf53f0987db20c6784bda9e2f378eafcd8415ea809ae69d5147e883e"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.647552 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l" event={"ID":"a39ee5b4-a248-4d15-95c3-bf801fe2c6de","Type":"ContainerStarted","Data":"51cb04f1c26212396b5412a5c38ccc9694e1c59112fd2ce2a05f3ad3f61f19b1"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.649853 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-k28wm" event={"ID":"db7abdfa-44a0-4c7b-b314-bec98e87552d","Type":"ContainerStarted","Data":"e9b345f56aab17f93180eded1cd8fc0966f7ee771a5a27b1416ef58a36846a03"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.650008 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-k28wm" Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.652114 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f4pwk" event={"ID":"4634141f-b890-48f3-b6c7-d8a730ff29b5","Type":"ContainerStarted","Data":"28495ca06bc1e0ee310a4f0aa5b335da8f1cc60d2bdde139f3225019978271c1"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.654190 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" event={"ID":"1d353c23-d497-4fb3-8672-88f6cb2734d4","Type":"ContainerStarted","Data":"f2fa36ac15154e5a924851865d9bec00832104b451dad3b1ada2e14e887a82fd"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.664684 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" event={"ID":"00f06e07-b382-4409-a7af-cd84abf48e99","Type":"ContainerStarted","Data":"54087eebb376a350d3ae881bc1c69ea99f982d7be5c04e95f1f74a84b442c902"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.683257 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zgkhs" event={"ID":"05a3a12e-c6b1-4ff3-9926-c9bb85192b03","Type":"ContainerStarted","Data":"fb23260b814106b630620060486cd0f2fd88c8ada13e36487f9b2ecab472d97d"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.683323 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zgkhs" 
event={"ID":"05a3a12e-c6b1-4ff3-9926-c9bb85192b03","Type":"ContainerStarted","Data":"b2651e780d9269954ae9cbbf84bbb2187c36a53a31906a5e05387493187fe279"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.693973 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" event={"ID":"a30d36bf-bb06-4237-9273-eeee9188e931","Type":"ContainerStarted","Data":"1d8c60cc2e520003001ab839a9ebb6ad6123d8ab12e00cd3857e4b9de4129766"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.698180 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jxk8b" event={"ID":"e3852642-f948-4814-8fbd-04301eb7b9c1","Type":"ContainerStarted","Data":"52e6b3c2bde9e0fc4ea51a4efa23b04b6da09197bba17b7e796e711f60e364ac"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.706960 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rqjhl" event={"ID":"d2caf342-1317-477b-bf32-eb860c7395c8","Type":"ContainerStarted","Data":"66931048535fff84237a8f4a082d254cd7e3bb1f4fd778e089d701de8f95933d"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.710666 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:23 crc kubenswrapper[4870]: E0216 17:02:23.711124 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:24.211103889 +0000 UTC m=+148.694568273 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.715089 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v25cw" event={"ID":"73e4dd57-ca25-46e2-9afb-a67a8d339e67","Type":"ContainerStarted","Data":"9a55aad1c3af1106ca6d6b3e81772c747b94449c54f358a14ebb2f203a184496"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.715284 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v25cw" Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.717055 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tcchp" event={"ID":"63e96b44-624a-4d42-b63e-22506f5bd250","Type":"ContainerStarted","Data":"b3d0dd657a4d4f906bf270bed7993ffd450e0a7404a7f281520dd740b4c71d49"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.718776 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" event={"ID":"d9ed0cdf-88f2-42cd-93e9-22517410ca31","Type":"ContainerStarted","Data":"e960c3f1e997009c49bfe6aeeee46aa2872a57f240ed237f51979827aa6c0f1c"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.719840 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.721421 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-k28wm" podStartSLOduration=7.721405882 podStartE2EDuration="7.721405882s" podCreationTimestamp="2026-02-16 17:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:23.715939127 +0000 UTC m=+148.199403511" watchObservedRunningTime="2026-02-16 17:02:23.721405882 +0000 UTC m=+148.204870266" Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.730345 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf" event={"ID":"7f83f0b7-935c-4972-83c7-bd9a2d95afc1","Type":"ContainerStarted","Data":"46969a6645783f3e1febad1030bce7fada66350cdd5d16593269de2141a8bf78"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.732281 4870 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4jpbt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.732328 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" podUID="d9ed0cdf-88f2-42cd-93e9-22517410ca31" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.735812 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rpzss" 
event={"ID":"5dc49196-10d0-4e90-8523-8f0d055c5800","Type":"ContainerStarted","Data":"6c3648e7b901b9f5897a3663ecd7fd0127994f027a9934049d46b6e7c804deca"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.735861 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rpzss" event={"ID":"5dc49196-10d0-4e90-8523-8f0d055c5800","Type":"ContainerStarted","Data":"69a4316b92b50a4f803e0f6ef534e023c244b24e795ee24f59de6cf870376736"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.746458 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8rbb8" event={"ID":"0ae17602-ebe6-41e2-9241-c0552e6a4e7e","Type":"ContainerStarted","Data":"df58ff4ae21790d6af80ba40bfb5827aa4379b8d334b65dc4ee6b2587d766cf3"} Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.747273 4870 patch_prober.go:28] interesting pod/console-operator-58897d9998-8q7hf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.747318 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8q7hf" podUID="6e4c9bd5-e76b-4be9-a6d8-d33b11c312f9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.747425 4870 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-b5nnl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 
17:02:23.747461 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" podUID="cc3d5e28-d52f-41d9-8360-faa12e014349" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.747539 4870 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jxjrh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.747653 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" podUID="08d93167-bc2f-4032-9840-f5eda9916ddd" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.747815 4870 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5ll8r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.747892 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" podUID="db804a3b-9f2e-4638-ae79-7ef21a87104d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.749283 4870 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6v8zp 
container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.749787 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" podUID="986bb24c-5a0b-4fe9-bd99-e48c3477cc45" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.751810 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jfvff" podStartSLOduration=126.751789457 podStartE2EDuration="2m6.751789457s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:23.748361149 +0000 UTC m=+148.231825533" watchObservedRunningTime="2026-02-16 17:02:23.751789457 +0000 UTC m=+148.235253841" Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.816001 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:23 crc kubenswrapper[4870]: E0216 17:02:23.816260 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:24.31622984 +0000 UTC m=+148.799694224 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.816672 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:23 crc kubenswrapper[4870]: E0216 17:02:23.818384 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.318373421 +0000 UTC m=+148.801837805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.849053 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lv95l" podStartSLOduration=126.848924721 podStartE2EDuration="2m6.848924721s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:23.802483609 +0000 UTC m=+148.285948023" watchObservedRunningTime="2026-02-16 17:02:23.848924721 +0000 UTC m=+148.332389105" Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.861316 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pnznr" podStartSLOduration=126.861285353 podStartE2EDuration="2m6.861285353s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:23.856822296 +0000 UTC m=+148.340286680" watchObservedRunningTime="2026-02-16 17:02:23.861285353 +0000 UTC m=+148.344749737" Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.888106 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-zgkhs" podStartSLOduration=126.888087475 podStartE2EDuration="2m6.888087475s" podCreationTimestamp="2026-02-16 17:00:17 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:23.887241021 +0000 UTC m=+148.370705395" watchObservedRunningTime="2026-02-16 17:02:23.888087475 +0000 UTC m=+148.371551859" Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.920007 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:23 crc kubenswrapper[4870]: E0216 17:02:23.920512 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.420478467 +0000 UTC m=+148.903942851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:23 crc kubenswrapper[4870]: I0216 17:02:23.921032 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:23 crc kubenswrapper[4870]: E0216 17:02:23.968865 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.468830903 +0000 UTC m=+148.952295287 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.003799 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4k6sf" podStartSLOduration=128.003771557 podStartE2EDuration="2m8.003771557s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:24.003132909 +0000 UTC m=+148.486597303" watchObservedRunningTime="2026-02-16 17:02:24.003771557 +0000 UTC m=+148.487235941" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.005671 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4nxwd" podStartSLOduration=128.005660271 podStartE2EDuration="2m8.005660271s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:23.946894209 +0000 UTC m=+148.430358603" watchObservedRunningTime="2026-02-16 17:02:24.005660271 +0000 UTC m=+148.489124665" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.028530 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:24 crc kubenswrapper[4870]: E0216 17:02:24.029283 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.529258702 +0000 UTC m=+149.012723086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.092743 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" podStartSLOduration=127.092714368 podStartE2EDuration="2m7.092714368s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:24.059618186 +0000 UTC m=+148.543082570" watchObservedRunningTime="2026-02-16 17:02:24.092714368 +0000 UTC m=+148.576178752" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.124822 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rpzss" podStartSLOduration=128.124800351 podStartE2EDuration="2m8.124800351s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 17:02:24.096335021 +0000 UTC m=+148.579799405" watchObservedRunningTime="2026-02-16 17:02:24.124800351 +0000 UTC m=+148.608264735" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.130501 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:24 crc kubenswrapper[4870]: E0216 17:02:24.131039 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.631008678 +0000 UTC m=+149.114473062 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.150927 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rqjhl" podStartSLOduration=127.150898264 podStartE2EDuration="2m7.150898264s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:24.150873453 +0000 UTC m=+148.634337837" watchObservedRunningTime="2026-02-16 
17:02:24.150898264 +0000 UTC m=+148.634362658" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.153777 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v25cw" podStartSLOduration=127.153764525 podStartE2EDuration="2m7.153764525s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:24.129189796 +0000 UTC m=+148.612654190" watchObservedRunningTime="2026-02-16 17:02:24.153764525 +0000 UTC m=+148.637228909" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.190693 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tcchp" podStartSLOduration=127.190667585 podStartE2EDuration="2m7.190667585s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:24.188207055 +0000 UTC m=+148.671671439" watchObservedRunningTime="2026-02-16 17:02:24.190667585 +0000 UTC m=+148.674131969" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.234072 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:24 crc kubenswrapper[4870]: E0216 17:02:24.234639 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:24.734614676 +0000 UTC m=+149.218079060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.315332 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-8rbb8" podStartSLOduration=127.315304722 podStartE2EDuration="2m7.315304722s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:24.223644394 +0000 UTC m=+148.707108788" watchObservedRunningTime="2026-02-16 17:02:24.315304722 +0000 UTC m=+148.798769106" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.336227 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.336322 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.336351 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.336408 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.336437 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:02:24 crc kubenswrapper[4870]: E0216 17:02:24.337694 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.837666298 +0000 UTC m=+149.321130682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.340233 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-16 16:57:23 +0000 UTC, rotation deadline is 2026-12-01 04:44:12.647517919 +0000 UTC Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.340272 4870 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6899h41m48.307249487s for next certificate rotation Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.349199 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.350528 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.350752 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.364174 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.413824 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" podStartSLOduration=127.413798074 podStartE2EDuration="2m7.413798074s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:24.316593789 +0000 UTC m=+148.800058173" watchObservedRunningTime="2026-02-16 17:02:24.413798074 +0000 UTC m=+148.897262458" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.439499 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:24 crc kubenswrapper[4870]: E0216 17:02:24.440014 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.93999089 +0000 UTC m=+149.423455274 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.548920 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.550484 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:24 crc kubenswrapper[4870]: E0216 17:02:24.550977 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.050939957 +0000 UTC m=+149.534404341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.568425 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-9d2ls" podStartSLOduration=127.568389203 podStartE2EDuration="2m7.568389203s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:24.564115932 +0000 UTC m=+149.047580316" watchObservedRunningTime="2026-02-16 17:02:24.568389203 +0000 UTC m=+149.051853597" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.575393 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jxk8b" podStartSLOduration=127.571579184 podStartE2EDuration="2m7.571579184s" podCreationTimestamp="2026-02-16 17:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:24.416339627 +0000 UTC m=+148.899804011" watchObservedRunningTime="2026-02-16 17:02:24.571579184 +0000 UTC m=+149.055043578" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.606133 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 
17:02:24 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:24 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:24 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.606209 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.638552 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.649317 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.652147 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:24 crc kubenswrapper[4870]: E0216 17:02:24.652593 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.152574589 +0000 UTC m=+149.636038973 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.754233 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:24 crc kubenswrapper[4870]: E0216 17:02:24.754625 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.254607242 +0000 UTC m=+149.738071636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.785741 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" event={"ID":"559fe54f-c6f9-4466-b9c8-da6318fc8f59","Type":"ContainerStarted","Data":"d3d9b6185cd6f4a8d10c2b2b761c0ec79e8b39796a282621beeff769df717f5c"} Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.816494 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8sggp" event={"ID":"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302","Type":"ContainerStarted","Data":"928c576e882d37a597e34d97f30d70077b5d76bb8bfc2752ab2800e6a45365ec"} Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.829495 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" event={"ID":"d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4","Type":"ContainerStarted","Data":"bf196e2c58ded1e1948f0c786642d2e91b71650780270333c179da37012cb397"} Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.830696 4870 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4jpbt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.830752 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" 
podUID="d9ed0cdf-88f2-42cd-93e9-22517410ca31" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.831060 4870 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jxjrh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.831108 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" podUID="08d93167-bc2f-4032-9840-f5eda9916ddd" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.858159 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:24 crc kubenswrapper[4870]: E0216 17:02:24.858733 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.358710804 +0000 UTC m=+149.842175188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:24 crc kubenswrapper[4870]: I0216 17:02:24.962043 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:24 crc kubenswrapper[4870]: E0216 17:02:24.966601 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.466581434 +0000 UTC m=+149.950045818 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.063984 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:25 crc kubenswrapper[4870]: E0216 17:02:25.064387 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.564362576 +0000 UTC m=+150.047826960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.166022 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:25 crc kubenswrapper[4870]: E0216 17:02:25.166533 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.666516913 +0000 UTC m=+150.149981287 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.274296 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:25 crc kubenswrapper[4870]: E0216 17:02:25.274989 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.774965279 +0000 UTC m=+150.258429663 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.279132 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" podStartSLOduration=129.279097236 podStartE2EDuration="2m9.279097236s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:24.87647465 +0000 UTC m=+149.359939034" watchObservedRunningTime="2026-02-16 17:02:25.279097236 +0000 UTC m=+149.762561620" Feb 16 17:02:25 crc kubenswrapper[4870]: W0216 17:02:25.312395 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-c3326d121a8cbe1f0b0028565dd7586196a094f02cc387f60fd941a507112a3e WatchSource:0}: Error finding container c3326d121a8cbe1f0b0028565dd7586196a094f02cc387f60fd941a507112a3e: Status 404 returned error can't find the container with id c3326d121a8cbe1f0b0028565dd7586196a094f02cc387f60fd941a507112a3e Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.377179 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:25 crc kubenswrapper[4870]: E0216 17:02:25.377815 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.877775634 +0000 UTC m=+150.361240028 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.478232 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:25 crc kubenswrapper[4870]: E0216 17:02:25.478754 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.978730287 +0000 UTC m=+150.462194671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:25 crc kubenswrapper[4870]: W0216 17:02:25.565211 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-7d87e55c843980ccf5c78c17153caad5e74163b33e1c4dffdcfb8e88df2ef58a WatchSource:0}: Error finding container 7d87e55c843980ccf5c78c17153caad5e74163b33e1c4dffdcfb8e88df2ef58a: Status 404 returned error can't find the container with id 7d87e55c843980ccf5c78c17153caad5e74163b33e1c4dffdcfb8e88df2ef58a Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.580338 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:25 crc kubenswrapper[4870]: E0216 17:02:25.580766 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.08075072 +0000 UTC m=+150.564215104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.606241 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:25 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:25 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:25 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.606346 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.681248 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:25 crc kubenswrapper[4870]: E0216 17:02:25.681377 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:26.181355723 +0000 UTC m=+150.664820107 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.681690 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:25 crc kubenswrapper[4870]: E0216 17:02:25.682034 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.182025392 +0000 UTC m=+150.665489776 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.783079 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:25 crc kubenswrapper[4870]: E0216 17:02:25.783299 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.283260622 +0000 UTC m=+150.766725006 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.783482 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:25 crc kubenswrapper[4870]: E0216 17:02:25.783887 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.28387698 +0000 UTC m=+150.767341364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.847747 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e8f84d52dc877f4a25a807e9e958698316bc0d492295efb576d1e26b47cb2196"} Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.847813 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"264b040d32a1b55cf0db00f029e51d3280b55abb9b893830fe42cbcbda12fdff"} Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.848578 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.852264 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"737234a4c68d337ad481a9d529a7a6a6a0a3596449d1b52b3cbade0f6e73dc8c"} Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.852331 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7d87e55c843980ccf5c78c17153caad5e74163b33e1c4dffdcfb8e88df2ef58a"} Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.876792 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" event={"ID":"559fe54f-c6f9-4466-b9c8-da6318fc8f59","Type":"ContainerStarted","Data":"d93ba2c5db64985aa23492bc58f4f3f5f69362c81e1f34b647dff78d0c9f394d"} Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.881665 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3b1231b185785278ae791f7838f8014743c10eea640a9ddf1c61abf612641980"} Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.881760 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c3326d121a8cbe1f0b0028565dd7586196a094f02cc387f60fd941a507112a3e"} Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.882240 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.882401 4870 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4jpbt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.882453 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" podUID="d9ed0cdf-88f2-42cd-93e9-22517410ca31" containerName="marketplace-operator" 
probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.884628 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:25 crc kubenswrapper[4870]: E0216 17:02:25.884809 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.384776431 +0000 UTC m=+150.868240815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.884968 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:25 crc kubenswrapper[4870]: E0216 17:02:25.885319 4870 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.385307556 +0000 UTC m=+150.868771940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.930237 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" podStartSLOduration=129.930216213 podStartE2EDuration="2m9.930216213s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:25.927698501 +0000 UTC m=+150.411162895" watchObservedRunningTime="2026-02-16 17:02:25.930216213 +0000 UTC m=+150.413680587" Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.986224 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:25 crc kubenswrapper[4870]: E0216 17:02:25.986439 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:26.486383471 +0000 UTC m=+150.969847865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:25 crc kubenswrapper[4870]: I0216 17:02:25.986581 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:25 crc kubenswrapper[4870]: E0216 17:02:25.987104 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.487093911 +0000 UTC m=+150.970558295 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.090412 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:26 crc kubenswrapper[4870]: E0216 17:02:26.090651 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.590613947 +0000 UTC m=+151.074078331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.091273 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:26 crc kubenswrapper[4870]: E0216 17:02:26.091633 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.591616026 +0000 UTC m=+151.075080410 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.192469 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:26 crc kubenswrapper[4870]: E0216 17:02:26.192731 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.692694232 +0000 UTC m=+151.176158616 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.192876 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:26 crc kubenswrapper[4870]: E0216 17:02:26.193277 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.693260438 +0000 UTC m=+151.176724822 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.293681 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:26 crc kubenswrapper[4870]: E0216 17:02:26.293902 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.793866441 +0000 UTC m=+151.277330825 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.294114 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:26 crc kubenswrapper[4870]: E0216 17:02:26.294559 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.79455114 +0000 UTC m=+151.278015514 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.395420 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:26 crc kubenswrapper[4870]: E0216 17:02:26.395709 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.895665947 +0000 UTC m=+151.379130381 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.496812 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:26 crc kubenswrapper[4870]: E0216 17:02:26.497376 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.997353841 +0000 UTC m=+151.480818225 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.548133 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.601863 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:26 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:26 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:26 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.601965 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.602563 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:26 crc kubenswrapper[4870]: E0216 17:02:26.603265 4870 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:27.103238784 +0000 UTC m=+151.586703168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.705067 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:26 crc kubenswrapper[4870]: E0216 17:02:26.705709 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:27.205694169 +0000 UTC m=+151.689158553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.806173 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:26 crc kubenswrapper[4870]: E0216 17:02:26.806457 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:27.306426376 +0000 UTC m=+151.789890760 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.806714 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:26 crc kubenswrapper[4870]: E0216 17:02:26.807110 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:27.307094205 +0000 UTC m=+151.790558579 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.896191 4870 generic.go:334] "Generic (PLEG): container finished" podID="ad671622-917b-4e62-a887-d2d6e0935f2e" containerID="bd8356847a79aea985e806460936709f6752079e00ec36e216e381c5f0178d6a" exitCode=0 Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.896219 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" event={"ID":"ad671622-917b-4e62-a887-d2d6e0935f2e","Type":"ContainerDied","Data":"bd8356847a79aea985e806460936709f6752079e00ec36e216e381c5f0178d6a"} Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.899270 4870 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-q2xvl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.899446 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" podUID="d766a774-c0b7-46f8-9f8b-fcf9b5c1dce4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 16 17:02:26 crc kubenswrapper[4870]: I0216 17:02:26.907408 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:26 crc kubenswrapper[4870]: E0216 17:02:26.907780 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:27.407751939 +0000 UTC m=+151.891216323 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.009301 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:27 crc kubenswrapper[4870]: E0216 17:02:27.010334 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:27.510308757 +0000 UTC m=+151.993773341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.110365 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:27 crc kubenswrapper[4870]: E0216 17:02:27.111016 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:27.610976492 +0000 UTC m=+152.094440876 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.212738 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:27 crc kubenswrapper[4870]: E0216 17:02:27.213240 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:27.713218651 +0000 UTC m=+152.196683035 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.219592 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.220515 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.224311 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.228676 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.232553 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.314306 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.314571 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/11181780-6d8a-4342-839f-03e0cfcd153a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"11181780-6d8a-4342-839f-03e0cfcd153a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.314601 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11181780-6d8a-4342-839f-03e0cfcd153a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"11181780-6d8a-4342-839f-03e0cfcd153a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:02:27 crc kubenswrapper[4870]: E0216 17:02:27.314701 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:27.814681938 +0000 UTC m=+152.298146322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.340383 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w5rt4"] Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.341449 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w5rt4" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.349139 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.366810 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w5rt4"] Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.415483 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11181780-6d8a-4342-839f-03e0cfcd153a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"11181780-6d8a-4342-839f-03e0cfcd153a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.415532 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11181780-6d8a-4342-839f-03e0cfcd153a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"11181780-6d8a-4342-839f-03e0cfcd153a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.415571 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e01b72-44e9-4f22-833e-9972542aca29-utilities\") pod \"community-operators-w5rt4\" (UID: \"53e01b72-44e9-4f22-833e-9972542aca29\") " pod="openshift-marketplace/community-operators-w5rt4" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.415615 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: 
\"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.415653 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e01b72-44e9-4f22-833e-9972542aca29-catalog-content\") pod \"community-operators-w5rt4\" (UID: \"53e01b72-44e9-4f22-833e-9972542aca29\") " pod="openshift-marketplace/community-operators-w5rt4" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.415676 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8czvc\" (UniqueName: \"kubernetes.io/projected/53e01b72-44e9-4f22-833e-9972542aca29-kube-api-access-8czvc\") pod \"community-operators-w5rt4\" (UID: \"53e01b72-44e9-4f22-833e-9972542aca29\") " pod="openshift-marketplace/community-operators-w5rt4" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.416069 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11181780-6d8a-4342-839f-03e0cfcd153a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"11181780-6d8a-4342-839f-03e0cfcd153a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:02:27 crc kubenswrapper[4870]: E0216 17:02:27.416344 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:27.916330331 +0000 UTC m=+152.399794715 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.488783 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11181780-6d8a-4342-839f-03e0cfcd153a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"11181780-6d8a-4342-839f-03e0cfcd153a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.522777 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.523516 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e01b72-44e9-4f22-833e-9972542aca29-utilities\") pod \"community-operators-w5rt4\" (UID: \"53e01b72-44e9-4f22-833e-9972542aca29\") " pod="openshift-marketplace/community-operators-w5rt4" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.523598 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e01b72-44e9-4f22-833e-9972542aca29-catalog-content\") pod \"community-operators-w5rt4\" (UID: \"53e01b72-44e9-4f22-833e-9972542aca29\") " 
pod="openshift-marketplace/community-operators-w5rt4" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.523618 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8czvc\" (UniqueName: \"kubernetes.io/projected/53e01b72-44e9-4f22-833e-9972542aca29-kube-api-access-8czvc\") pod \"community-operators-w5rt4\" (UID: \"53e01b72-44e9-4f22-833e-9972542aca29\") " pod="openshift-marketplace/community-operators-w5rt4" Feb 16 17:02:27 crc kubenswrapper[4870]: E0216 17:02:27.524146 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.024127178 +0000 UTC m=+152.507591572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.524702 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e01b72-44e9-4f22-833e-9972542aca29-utilities\") pod \"community-operators-w5rt4\" (UID: \"53e01b72-44e9-4f22-833e-9972542aca29\") " pod="openshift-marketplace/community-operators-w5rt4" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.524968 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e01b72-44e9-4f22-833e-9972542aca29-catalog-content\") pod \"community-operators-w5rt4\" (UID: 
\"53e01b72-44e9-4f22-833e-9972542aca29\") " pod="openshift-marketplace/community-operators-w5rt4" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.527115 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wdq4b"] Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.528217 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wdq4b" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.536565 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.540147 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.579837 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wdq4b"] Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.580855 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8czvc\" (UniqueName: \"kubernetes.io/projected/53e01b72-44e9-4f22-833e-9972542aca29-kube-api-access-8czvc\") pod \"community-operators-w5rt4\" (UID: \"53e01b72-44e9-4f22-833e-9972542aca29\") " pod="openshift-marketplace/community-operators-w5rt4" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.628128 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.628196 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b63cc22-5778-4805-b6fd-97f2ce43fda1-utilities\") pod \"certified-operators-wdq4b\" (UID: \"2b63cc22-5778-4805-b6fd-97f2ce43fda1\") " pod="openshift-marketplace/certified-operators-wdq4b" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.628217 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snpms\" (UniqueName: \"kubernetes.io/projected/2b63cc22-5778-4805-b6fd-97f2ce43fda1-kube-api-access-snpms\") pod \"certified-operators-wdq4b\" (UID: \"2b63cc22-5778-4805-b6fd-97f2ce43fda1\") " pod="openshift-marketplace/certified-operators-wdq4b" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.628257 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b63cc22-5778-4805-b6fd-97f2ce43fda1-catalog-content\") pod \"certified-operators-wdq4b\" (UID: \"2b63cc22-5778-4805-b6fd-97f2ce43fda1\") " pod="openshift-marketplace/certified-operators-wdq4b" Feb 16 17:02:27 crc kubenswrapper[4870]: E0216 17:02:27.628636 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.128618171 +0000 UTC m=+152.612082555 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.640224 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:27 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:27 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:27 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.640328 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.655424 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w5rt4" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.729929 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.730255 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b63cc22-5778-4805-b6fd-97f2ce43fda1-utilities\") pod \"certified-operators-wdq4b\" (UID: \"2b63cc22-5778-4805-b6fd-97f2ce43fda1\") " pod="openshift-marketplace/certified-operators-wdq4b" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.730283 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snpms\" (UniqueName: \"kubernetes.io/projected/2b63cc22-5778-4805-b6fd-97f2ce43fda1-kube-api-access-snpms\") pod \"certified-operators-wdq4b\" (UID: \"2b63cc22-5778-4805-b6fd-97f2ce43fda1\") " pod="openshift-marketplace/certified-operators-wdq4b" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.730321 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b63cc22-5778-4805-b6fd-97f2ce43fda1-catalog-content\") pod \"certified-operators-wdq4b\" (UID: \"2b63cc22-5778-4805-b6fd-97f2ce43fda1\") " pod="openshift-marketplace/certified-operators-wdq4b" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.730757 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b63cc22-5778-4805-b6fd-97f2ce43fda1-catalog-content\") pod \"certified-operators-wdq4b\" (UID: \"2b63cc22-5778-4805-b6fd-97f2ce43fda1\") 
" pod="openshift-marketplace/certified-operators-wdq4b" Feb 16 17:02:27 crc kubenswrapper[4870]: E0216 17:02:27.730834 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.230815929 +0000 UTC m=+152.714280313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.731055 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b63cc22-5778-4805-b6fd-97f2ce43fda1-utilities\") pod \"certified-operators-wdq4b\" (UID: \"2b63cc22-5778-4805-b6fd-97f2ce43fda1\") " pod="openshift-marketplace/certified-operators-wdq4b" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.760801 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tmfnv"] Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.771252 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tmfnv" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.810495 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snpms\" (UniqueName: \"kubernetes.io/projected/2b63cc22-5778-4805-b6fd-97f2ce43fda1-kube-api-access-snpms\") pod \"certified-operators-wdq4b\" (UID: \"2b63cc22-5778-4805-b6fd-97f2ce43fda1\") " pod="openshift-marketplace/certified-operators-wdq4b" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.812678 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tmfnv"] Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.838227 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.838286 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/850617f1-446f-44e3-9a83-215215f95cbd-utilities\") pod \"community-operators-tmfnv\" (UID: \"850617f1-446f-44e3-9a83-215215f95cbd\") " pod="openshift-marketplace/community-operators-tmfnv" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.838324 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gs4v\" (UniqueName: \"kubernetes.io/projected/850617f1-446f-44e3-9a83-215215f95cbd-kube-api-access-6gs4v\") pod \"community-operators-tmfnv\" (UID: \"850617f1-446f-44e3-9a83-215215f95cbd\") " pod="openshift-marketplace/community-operators-tmfnv" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.838371 
4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/850617f1-446f-44e3-9a83-215215f95cbd-catalog-content\") pod \"community-operators-tmfnv\" (UID: \"850617f1-446f-44e3-9a83-215215f95cbd\") " pod="openshift-marketplace/community-operators-tmfnv" Feb 16 17:02:27 crc kubenswrapper[4870]: E0216 17:02:27.838752 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.338734379 +0000 UTC m=+152.822198763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.932726 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8sggp" event={"ID":"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302","Type":"ContainerStarted","Data":"7c6f871801061d4a97ede190d6bc2f53e6266c0480738d514abad2a2d3967a9f"} Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.941647 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.941975 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/850617f1-446f-44e3-9a83-215215f95cbd-catalog-content\") pod \"community-operators-tmfnv\" (UID: \"850617f1-446f-44e3-9a83-215215f95cbd\") " pod="openshift-marketplace/community-operators-tmfnv" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.942060 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/850617f1-446f-44e3-9a83-215215f95cbd-utilities\") pod \"community-operators-tmfnv\" (UID: \"850617f1-446f-44e3-9a83-215215f95cbd\") " pod="openshift-marketplace/community-operators-tmfnv" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.942094 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gs4v\" (UniqueName: \"kubernetes.io/projected/850617f1-446f-44e3-9a83-215215f95cbd-kube-api-access-6gs4v\") pod \"community-operators-tmfnv\" (UID: \"850617f1-446f-44e3-9a83-215215f95cbd\") " pod="openshift-marketplace/community-operators-tmfnv" Feb 16 17:02:27 crc kubenswrapper[4870]: E0216 17:02:27.942263 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.442228614 +0000 UTC m=+152.925692998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.943148 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/850617f1-446f-44e3-9a83-215215f95cbd-catalog-content\") pod \"community-operators-tmfnv\" (UID: \"850617f1-446f-44e3-9a83-215215f95cbd\") " pod="openshift-marketplace/community-operators-tmfnv" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.943455 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/850617f1-446f-44e3-9a83-215215f95cbd-utilities\") pod \"community-operators-tmfnv\" (UID: \"850617f1-446f-44e3-9a83-215215f95cbd\") " pod="openshift-marketplace/community-operators-tmfnv" Feb 16 17:02:27 crc kubenswrapper[4870]: I0216 17:02:27.970548 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wdq4b" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.044123 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:28 crc kubenswrapper[4870]: E0216 17:02:28.044476 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.544461903 +0000 UTC m=+153.027926287 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.047660 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gs4v\" (UniqueName: \"kubernetes.io/projected/850617f1-446f-44e3-9a83-215215f95cbd-kube-api-access-6gs4v\") pod \"community-operators-tmfnv\" (UID: \"850617f1-446f-44e3-9a83-215215f95cbd\") " pod="openshift-marketplace/community-operators-tmfnv" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.055303 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-68qjg"] Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.056454 4870 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-68qjg" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.082769 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-68qjg"] Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.116164 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.145570 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.145850 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2ghk\" (UniqueName: \"kubernetes.io/projected/cfc3b15d-ea9f-4842-8d24-0af28f83153d-kube-api-access-b2ghk\") pod \"certified-operators-68qjg\" (UID: \"cfc3b15d-ea9f-4842-8d24-0af28f83153d\") " pod="openshift-marketplace/certified-operators-68qjg" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.145920 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfc3b15d-ea9f-4842-8d24-0af28f83153d-catalog-content\") pod \"certified-operators-68qjg\" (UID: \"cfc3b15d-ea9f-4842-8d24-0af28f83153d\") " pod="openshift-marketplace/certified-operators-68qjg" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.145989 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfc3b15d-ea9f-4842-8d24-0af28f83153d-utilities\") 
pod \"certified-operators-68qjg\" (UID: \"cfc3b15d-ea9f-4842-8d24-0af28f83153d\") " pod="openshift-marketplace/certified-operators-68qjg" Feb 16 17:02:28 crc kubenswrapper[4870]: E0216 17:02:28.146181 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.646143137 +0000 UTC m=+153.129607521 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.187321 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tmfnv" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.247336 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfc3b15d-ea9f-4842-8d24-0af28f83153d-catalog-content\") pod \"certified-operators-68qjg\" (UID: \"cfc3b15d-ea9f-4842-8d24-0af28f83153d\") " pod="openshift-marketplace/certified-operators-68qjg" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.247400 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfc3b15d-ea9f-4842-8d24-0af28f83153d-utilities\") pod \"certified-operators-68qjg\" (UID: \"cfc3b15d-ea9f-4842-8d24-0af28f83153d\") " pod="openshift-marketplace/certified-operators-68qjg" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.247485 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.247530 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2ghk\" (UniqueName: \"kubernetes.io/projected/cfc3b15d-ea9f-4842-8d24-0af28f83153d-kube-api-access-b2ghk\") pod \"certified-operators-68qjg\" (UID: \"cfc3b15d-ea9f-4842-8d24-0af28f83153d\") " pod="openshift-marketplace/certified-operators-68qjg" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.252889 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfc3b15d-ea9f-4842-8d24-0af28f83153d-utilities\") pod \"certified-operators-68qjg\" 
(UID: \"cfc3b15d-ea9f-4842-8d24-0af28f83153d\") " pod="openshift-marketplace/certified-operators-68qjg" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.253678 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfc3b15d-ea9f-4842-8d24-0af28f83153d-catalog-content\") pod \"certified-operators-68qjg\" (UID: \"cfc3b15d-ea9f-4842-8d24-0af28f83153d\") " pod="openshift-marketplace/certified-operators-68qjg" Feb 16 17:02:28 crc kubenswrapper[4870]: E0216 17:02:28.254086 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.754066318 +0000 UTC m=+153.237530772 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.267139 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-fplj9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.267205 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fplj9" podUID="6cb09d74-7044-4c8c-a89b-6bf4593ffb9d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection 
refused" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.267206 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-fplj9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.267255 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fplj9" podUID="6cb09d74-7044-4c8c-a89b-6bf4593ffb9d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.273241 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.290563 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-q2xvl" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.291754 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.291772 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.296306 4870 patch_prober.go:28] interesting pod/console-f9d7485db-n96b6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.296640 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-n96b6" 
podUID="ed053e72-4999-4b5d-a9f3-c58b92280c8c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.313156 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2ghk\" (UniqueName: \"kubernetes.io/projected/cfc3b15d-ea9f-4842-8d24-0af28f83153d-kube-api-access-b2ghk\") pod \"certified-operators-68qjg\" (UID: \"cfc3b15d-ea9f-4842-8d24-0af28f83153d\") " pod="openshift-marketplace/certified-operators-68qjg" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.348881 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:28 crc kubenswrapper[4870]: E0216 17:02:28.350703 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.850679077 +0000 UTC m=+153.334143471 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.385891 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-68qjg" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.422477 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.422524 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.430476 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.430514 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.432075 4870 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cnjq9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.432132 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" podUID="559fe54f-c6f9-4466-b9c8-da6318fc8f59" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.448220 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.451050 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:28 crc kubenswrapper[4870]: E0216 17:02:28.451620 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.951598429 +0000 UTC m=+153.435062813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.547338 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.552378 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:28 crc kubenswrapper[4870]: E0216 17:02:28.554224 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:29.054207528 +0000 UTC m=+153.537671912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.590125 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6v8zp" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.598831 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.654344 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.655296 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b5nnl" Feb 16 17:02:28 crc kubenswrapper[4870]: E0216 17:02:28.658324 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:29.15830385 +0000 UTC m=+153.641768234 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.697514 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8q7hf" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.771890 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.772514 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jxjrh" Feb 16 17:02:28 crc kubenswrapper[4870]: E0216 17:02:28.774846 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:29.274815806 +0000 UTC m=+153.758280190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.876073 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:28 crc kubenswrapper[4870]: E0216 17:02:28.877054 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:29.377035075 +0000 UTC m=+153.860499459 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.883783 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.899863 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:28 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:28 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:28 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.900630 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.941286 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8sggp" event={"ID":"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302","Type":"ContainerStarted","Data":"3a6babe72b23e70ea3615205e037cc39c33d5f1962d8d8635972e3c9beb10a9b"} Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.948202 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh2p8" Feb 16 17:02:28 crc kubenswrapper[4870]: I0216 17:02:28.983768 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:28 crc kubenswrapper[4870]: E0216 17:02:28.984810 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:29.48476706 +0000 UTC m=+153.968231464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.074619 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w5rt4"] Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.087067 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:29 crc kubenswrapper[4870]: E0216 17:02:29.087503 4870 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:29.587488003 +0000 UTC m=+154.070952387 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.162832 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wdq4b"] Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.189307 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:29 crc kubenswrapper[4870]: E0216 17:02:29.189781 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:29.689736503 +0000 UTC m=+154.173200887 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.199814 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tmfnv"] Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.212361 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" Feb 16 17:02:29 crc kubenswrapper[4870]: W0216 17:02:29.233838 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod850617f1_446f_44e3_9a83_215215f95cbd.slice/crio-006a28e07e4eeaf1bd147e1d2520fcac5467356c5bdb70e2aecf991f9d82f18d WatchSource:0}: Error finding container 006a28e07e4eeaf1bd147e1d2520fcac5467356c5bdb70e2aecf991f9d82f18d: Status 404 returned error can't find the container with id 006a28e07e4eeaf1bd147e1d2520fcac5467356c5bdb70e2aecf991f9d82f18d Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.235196 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 17:02:29 crc kubenswrapper[4870]: W0216 17:02:29.257372 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod11181780_6d8a_4342_839f_03e0cfcd153a.slice/crio-7010cea158a9adfc7fe4ce29f4d86c30eab7b783184097fa83e27405cca295c3 WatchSource:0}: Error finding container 7010cea158a9adfc7fe4ce29f4d86c30eab7b783184097fa83e27405cca295c3: Status 404 returned error can't find the container with 
id 7010cea158a9adfc7fe4ce29f4d86c30eab7b783184097fa83e27405cca295c3 Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.290706 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t7w5\" (UniqueName: \"kubernetes.io/projected/ad671622-917b-4e62-a887-d2d6e0935f2e-kube-api-access-8t7w5\") pod \"ad671622-917b-4e62-a887-d2d6e0935f2e\" (UID: \"ad671622-917b-4e62-a887-d2d6e0935f2e\") " Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.290796 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad671622-917b-4e62-a887-d2d6e0935f2e-config-volume\") pod \"ad671622-917b-4e62-a887-d2d6e0935f2e\" (UID: \"ad671622-917b-4e62-a887-d2d6e0935f2e\") " Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.290843 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad671622-917b-4e62-a887-d2d6e0935f2e-secret-volume\") pod \"ad671622-917b-4e62-a887-d2d6e0935f2e\" (UID: \"ad671622-917b-4e62-a887-d2d6e0935f2e\") " Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.291343 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:29 crc kubenswrapper[4870]: E0216 17:02:29.291797 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:29.791778466 +0000 UTC m=+154.275242850 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.292974 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad671622-917b-4e62-a887-d2d6e0935f2e-config-volume" (OuterVolumeSpecName: "config-volume") pod "ad671622-917b-4e62-a887-d2d6e0935f2e" (UID: "ad671622-917b-4e62-a887-d2d6e0935f2e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.302917 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad671622-917b-4e62-a887-d2d6e0935f2e-kube-api-access-8t7w5" (OuterVolumeSpecName: "kube-api-access-8t7w5") pod "ad671622-917b-4e62-a887-d2d6e0935f2e" (UID: "ad671622-917b-4e62-a887-d2d6e0935f2e"). InnerVolumeSpecName "kube-api-access-8t7w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.304085 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad671622-917b-4e62-a887-d2d6e0935f2e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ad671622-917b-4e62-a887-d2d6e0935f2e" (UID: "ad671622-917b-4e62-a887-d2d6e0935f2e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.317868 4870 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.330408 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qrlg6"] Feb 16 17:02:29 crc kubenswrapper[4870]: E0216 17:02:29.330842 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad671622-917b-4e62-a887-d2d6e0935f2e" containerName="collect-profiles" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.333826 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad671622-917b-4e62-a887-d2d6e0935f2e" containerName="collect-profiles" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.334197 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad671622-917b-4e62-a887-d2d6e0935f2e" containerName="collect-profiles" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.335514 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qrlg6" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.340476 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.346267 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qrlg6"] Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.392144 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:29 crc kubenswrapper[4870]: E0216 17:02:29.392396 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:29.892349098 +0000 UTC m=+154.375813482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.393095 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-utilities\") pod \"redhat-marketplace-qrlg6\" (UID: \"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf\") " pod="openshift-marketplace/redhat-marketplace-qrlg6" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.393280 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wv4c\" (UniqueName: \"kubernetes.io/projected/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-kube-api-access-7wv4c\") pod \"redhat-marketplace-qrlg6\" (UID: \"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf\") " pod="openshift-marketplace/redhat-marketplace-qrlg6" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.393385 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.393628 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-catalog-content\") pod \"redhat-marketplace-qrlg6\" (UID: \"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf\") " pod="openshift-marketplace/redhat-marketplace-qrlg6" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.393818 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t7w5\" (UniqueName: \"kubernetes.io/projected/ad671622-917b-4e62-a887-d2d6e0935f2e-kube-api-access-8t7w5\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:29 crc kubenswrapper[4870]: E0216 17:02:29.393817 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 17:02:29.893798569 +0000 UTC m=+154.377263133 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cf6bm" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.393864 4870 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad671622-917b-4e62-a887-d2d6e0935f2e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.393878 4870 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad671622-917b-4e62-a887-d2d6e0935f2e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.413127 4870 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T17:02:29.31791735Z","Handler":null,"Name":""} Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.417988 4870 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.418202 4870 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.495206 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.495245 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-68qjg"] Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.495492 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-catalog-content\") pod \"redhat-marketplace-qrlg6\" (UID: \"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf\") " pod="openshift-marketplace/redhat-marketplace-qrlg6" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.495577 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-utilities\") pod \"redhat-marketplace-qrlg6\" (UID: \"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf\") " 
pod="openshift-marketplace/redhat-marketplace-qrlg6" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.495616 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wv4c\" (UniqueName: \"kubernetes.io/projected/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-kube-api-access-7wv4c\") pod \"redhat-marketplace-qrlg6\" (UID: \"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf\") " pod="openshift-marketplace/redhat-marketplace-qrlg6" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.496749 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-catalog-content\") pod \"redhat-marketplace-qrlg6\" (UID: \"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf\") " pod="openshift-marketplace/redhat-marketplace-qrlg6" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.498176 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-utilities\") pod \"redhat-marketplace-qrlg6\" (UID: \"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf\") " pod="openshift-marketplace/redhat-marketplace-qrlg6" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.502143 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.532931 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wv4c\" (UniqueName: \"kubernetes.io/projected/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-kube-api-access-7wv4c\") pod \"redhat-marketplace-qrlg6\" (UID: \"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf\") " pod="openshift-marketplace/redhat-marketplace-qrlg6" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.598293 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.600728 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:29 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:29 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:29 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.601116 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.602703 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qrlg6" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.609342 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.609512 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.676653 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cf6bm\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.725338 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.739505 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cdx44"] Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.786047 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdx44"] Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.786421 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdx44" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.914720 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11e0dfd3-85a3-45b6-889d-31159c5a23cb-catalog-content\") pod \"redhat-marketplace-cdx44\" (UID: \"11e0dfd3-85a3-45b6-889d-31159c5a23cb\") " pod="openshift-marketplace/redhat-marketplace-cdx44" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.914804 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plgt6\" (UniqueName: \"kubernetes.io/projected/11e0dfd3-85a3-45b6-889d-31159c5a23cb-kube-api-access-plgt6\") pod \"redhat-marketplace-cdx44\" (UID: \"11e0dfd3-85a3-45b6-889d-31159c5a23cb\") " pod="openshift-marketplace/redhat-marketplace-cdx44" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.914842 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11e0dfd3-85a3-45b6-889d-31159c5a23cb-utilities\") pod \"redhat-marketplace-cdx44\" (UID: \"11e0dfd3-85a3-45b6-889d-31159c5a23cb\") " pod="openshift-marketplace/redhat-marketplace-cdx44" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.949302 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" event={"ID":"ad671622-917b-4e62-a887-d2d6e0935f2e","Type":"ContainerDied","Data":"55a6ab5af4c26d2f5b24e2f91cef331398123d76f53880c6d570dc82cb4a6e72"} Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.949612 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55a6ab5af4c26d2f5b24e2f91cef331398123d76f53880c6d570dc82cb4a6e72" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.949766 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf" Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.952580 4870 generic.go:334] "Generic (PLEG): container finished" podID="850617f1-446f-44e3-9a83-215215f95cbd" containerID="3eaca3f7360d9841ada91c098b276c30d3c29d002b30dd11903c8cf584353594" exitCode=0 Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.952658 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmfnv" event={"ID":"850617f1-446f-44e3-9a83-215215f95cbd","Type":"ContainerDied","Data":"3eaca3f7360d9841ada91c098b276c30d3c29d002b30dd11903c8cf584353594"} Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.952695 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmfnv" event={"ID":"850617f1-446f-44e3-9a83-215215f95cbd","Type":"ContainerStarted","Data":"006a28e07e4eeaf1bd147e1d2520fcac5467356c5bdb70e2aecf991f9d82f18d"} Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.954667 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.970155 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"11181780-6d8a-4342-839f-03e0cfcd153a","Type":"ContainerStarted","Data":"3bb6b6b1cc43eddad2c0ab56465dedec9eedb838d91a0fcab2ec1125b1091ae4"} Feb 16 17:02:29 crc kubenswrapper[4870]: I0216 17:02:29.970207 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"11181780-6d8a-4342-839f-03e0cfcd153a","Type":"ContainerStarted","Data":"7010cea158a9adfc7fe4ce29f4d86c30eab7b783184097fa83e27405cca295c3"} Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:29.994600 4870 generic.go:334] "Generic (PLEG): container finished" podID="53e01b72-44e9-4f22-833e-9972542aca29" 
containerID="57ecd99fb4d92f4aed47ae16670d3e913e224c57a5c26d4ee53fe07fc42103bc" exitCode=0 Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:29.994695 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5rt4" event={"ID":"53e01b72-44e9-4f22-833e-9972542aca29","Type":"ContainerDied","Data":"57ecd99fb4d92f4aed47ae16670d3e913e224c57a5c26d4ee53fe07fc42103bc"} Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:29.995014 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5rt4" event={"ID":"53e01b72-44e9-4f22-833e-9972542aca29","Type":"ContainerStarted","Data":"cadf72f99151c2fdfff4d03fc6c556a3bdaeb8714a2a9d7c7dbca6efbf9a6312"} Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.016854 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11e0dfd3-85a3-45b6-889d-31159c5a23cb-catalog-content\") pod \"redhat-marketplace-cdx44\" (UID: \"11e0dfd3-85a3-45b6-889d-31159c5a23cb\") " pod="openshift-marketplace/redhat-marketplace-cdx44" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.016955 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plgt6\" (UniqueName: \"kubernetes.io/projected/11e0dfd3-85a3-45b6-889d-31159c5a23cb-kube-api-access-plgt6\") pod \"redhat-marketplace-cdx44\" (UID: \"11e0dfd3-85a3-45b6-889d-31159c5a23cb\") " pod="openshift-marketplace/redhat-marketplace-cdx44" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.017001 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11e0dfd3-85a3-45b6-889d-31159c5a23cb-utilities\") pod \"redhat-marketplace-cdx44\" (UID: \"11e0dfd3-85a3-45b6-889d-31159c5a23cb\") " pod="openshift-marketplace/redhat-marketplace-cdx44" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.018438 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11e0dfd3-85a3-45b6-889d-31159c5a23cb-utilities\") pod \"redhat-marketplace-cdx44\" (UID: \"11e0dfd3-85a3-45b6-889d-31159c5a23cb\") " pod="openshift-marketplace/redhat-marketplace-cdx44" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.019427 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11e0dfd3-85a3-45b6-889d-31159c5a23cb-catalog-content\") pod \"redhat-marketplace-cdx44\" (UID: \"11e0dfd3-85a3-45b6-889d-31159c5a23cb\") " pod="openshift-marketplace/redhat-marketplace-cdx44" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.024214 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8sggp" event={"ID":"46ad0255-25ad-4b1b-9a88-3a2c8f3eb302","Type":"ContainerStarted","Data":"e1ae1c559b00bfc92065d5f2a924a17b0be3e5428365735730c740418baced70"} Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.030522 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.030505006 podStartE2EDuration="3.030505006s" podCreationTimestamp="2026-02-16 17:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:30.026834961 +0000 UTC m=+154.510299355" watchObservedRunningTime="2026-02-16 17:02:30.030505006 +0000 UTC m=+154.513969390" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.045531 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68qjg" event={"ID":"cfc3b15d-ea9f-4842-8d24-0af28f83153d","Type":"ContainerStarted","Data":"ff290cb6fe4e36631966f53950a38e92601decbaa8ddc314a49aee62aa835eb5"} Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.045588 4870 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68qjg" event={"ID":"cfc3b15d-ea9f-4842-8d24-0af28f83153d","Type":"ContainerStarted","Data":"b2774b6b11a661f9b5110b05e857fa9ed2e2be90c8bcb346f1bf5e2c850f8c05"} Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.056694 4870 generic.go:334] "Generic (PLEG): container finished" podID="2b63cc22-5778-4805-b6fd-97f2ce43fda1" containerID="36aa45dad41518901c3fbddcc07773bd29aea24edb016cb1be344ac4a43aea7b" exitCode=0 Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.058440 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdq4b" event={"ID":"2b63cc22-5778-4805-b6fd-97f2ce43fda1","Type":"ContainerDied","Data":"36aa45dad41518901c3fbddcc07773bd29aea24edb016cb1be344ac4a43aea7b"} Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.058474 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdq4b" event={"ID":"2b63cc22-5778-4805-b6fd-97f2ce43fda1","Type":"ContainerStarted","Data":"714343a040c580bde319d4e559d491e8e0261bdeef3089d5989689863ff8f4fc"} Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.067016 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plgt6\" (UniqueName: \"kubernetes.io/projected/11e0dfd3-85a3-45b6-889d-31159c5a23cb-kube-api-access-plgt6\") pod \"redhat-marketplace-cdx44\" (UID: \"11e0dfd3-85a3-45b6-889d-31159c5a23cb\") " pod="openshift-marketplace/redhat-marketplace-cdx44" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.097203 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qrlg6"] Feb 16 17:02:30 crc kubenswrapper[4870]: W0216 17:02:30.100872 4870 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8a6b6ab_b5ec_4627_9a9f_8a5623d442cf.slice/crio-6d225f44bc887afe6f09fb5c08e37f1890846a608c16fa3696839bf688fa5c12 WatchSource:0}: Error finding container 6d225f44bc887afe6f09fb5c08e37f1890846a608c16fa3696839bf688fa5c12: Status 404 returned error can't find the container with id 6d225f44bc887afe6f09fb5c08e37f1890846a608c16fa3696839bf688fa5c12 Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.120304 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8sggp" podStartSLOduration=15.12027945 podStartE2EDuration="15.12027945s" podCreationTimestamp="2026-02-16 17:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:30.118733606 +0000 UTC m=+154.602197990" watchObservedRunningTime="2026-02-16 17:02:30.12027945 +0000 UTC m=+154.603743844" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.317054 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdx44" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.384323 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.396429 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cf6bm"] Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.577294 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.578395 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.582324 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.592466 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.592704 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.601640 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:30 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:30 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:30 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.601745 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.633407 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18097b1b-64e3-437f-94f4-0e23ceae2bd4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"18097b1b-64e3-437f-94f4-0e23ceae2bd4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.633998 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18097b1b-64e3-437f-94f4-0e23ceae2bd4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"18097b1b-64e3-437f-94f4-0e23ceae2bd4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.655173 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdx44"] Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.722306 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nr8mf"] Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.723848 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nr8mf" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.727042 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.735887 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nr8mf"] Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.736156 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18097b1b-64e3-437f-94f4-0e23ceae2bd4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"18097b1b-64e3-437f-94f4-0e23ceae2bd4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.736389 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18097b1b-64e3-437f-94f4-0e23ceae2bd4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"18097b1b-64e3-437f-94f4-0e23ceae2bd4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.736594 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18097b1b-64e3-437f-94f4-0e23ceae2bd4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"18097b1b-64e3-437f-94f4-0e23ceae2bd4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.789239 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18097b1b-64e3-437f-94f4-0e23ceae2bd4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"18097b1b-64e3-437f-94f4-0e23ceae2bd4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.837821 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/588600bc-c342-4b4a-a755-0d8b541f0ca1-utilities\") pod \"redhat-operators-nr8mf\" (UID: \"588600bc-c342-4b4a-a755-0d8b541f0ca1\") " pod="openshift-marketplace/redhat-operators-nr8mf" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.838156 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skkfx\" (UniqueName: \"kubernetes.io/projected/588600bc-c342-4b4a-a755-0d8b541f0ca1-kube-api-access-skkfx\") pod \"redhat-operators-nr8mf\" (UID: \"588600bc-c342-4b4a-a755-0d8b541f0ca1\") " pod="openshift-marketplace/redhat-operators-nr8mf" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.838238 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/588600bc-c342-4b4a-a755-0d8b541f0ca1-catalog-content\") pod \"redhat-operators-nr8mf\" (UID: \"588600bc-c342-4b4a-a755-0d8b541f0ca1\") " pod="openshift-marketplace/redhat-operators-nr8mf" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.940040 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skkfx\" (UniqueName: \"kubernetes.io/projected/588600bc-c342-4b4a-a755-0d8b541f0ca1-kube-api-access-skkfx\") pod \"redhat-operators-nr8mf\" (UID: \"588600bc-c342-4b4a-a755-0d8b541f0ca1\") " pod="openshift-marketplace/redhat-operators-nr8mf" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.940124 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/588600bc-c342-4b4a-a755-0d8b541f0ca1-catalog-content\") pod \"redhat-operators-nr8mf\" (UID: \"588600bc-c342-4b4a-a755-0d8b541f0ca1\") " pod="openshift-marketplace/redhat-operators-nr8mf" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.940177 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/588600bc-c342-4b4a-a755-0d8b541f0ca1-utilities\") pod \"redhat-operators-nr8mf\" (UID: \"588600bc-c342-4b4a-a755-0d8b541f0ca1\") " pod="openshift-marketplace/redhat-operators-nr8mf" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.942444 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/588600bc-c342-4b4a-a755-0d8b541f0ca1-catalog-content\") pod \"redhat-operators-nr8mf\" (UID: \"588600bc-c342-4b4a-a755-0d8b541f0ca1\") " pod="openshift-marketplace/redhat-operators-nr8mf" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.942500 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/588600bc-c342-4b4a-a755-0d8b541f0ca1-utilities\") pod \"redhat-operators-nr8mf\" (UID: \"588600bc-c342-4b4a-a755-0d8b541f0ca1\") " pod="openshift-marketplace/redhat-operators-nr8mf" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.965305 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-skkfx\" (UniqueName: \"kubernetes.io/projected/588600bc-c342-4b4a-a755-0d8b541f0ca1-kube-api-access-skkfx\") pod \"redhat-operators-nr8mf\" (UID: \"588600bc-c342-4b4a-a755-0d8b541f0ca1\") " pod="openshift-marketplace/redhat-operators-nr8mf" Feb 16 17:02:30 crc kubenswrapper[4870]: I0216 17:02:30.982928 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.075775 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" event={"ID":"e6bf0f44-e205-4b3c-8360-a9578c67459f","Type":"ContainerStarted","Data":"f3f558d593d108e138747a264274f3c88244fb746de4bc8bda93aa3003706909"} Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.076126 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" event={"ID":"e6bf0f44-e205-4b3c-8360-a9578c67459f","Type":"ContainerStarted","Data":"f15deb91dc680e9f0ded4deea9cc421d10367fb62eb2482d9db355d926a0d5bc"} Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.076319 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.083302 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nr8mf" Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.093413 4870 generic.go:334] "Generic (PLEG): container finished" podID="11e0dfd3-85a3-45b6-889d-31159c5a23cb" containerID="889871adcacd8d65757b1085f48371e28562dae726c037d686b616dc3bce395c" exitCode=0 Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.093565 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdx44" event={"ID":"11e0dfd3-85a3-45b6-889d-31159c5a23cb","Type":"ContainerDied","Data":"889871adcacd8d65757b1085f48371e28562dae726c037d686b616dc3bce395c"} Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.093605 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdx44" event={"ID":"11e0dfd3-85a3-45b6-889d-31159c5a23cb","Type":"ContainerStarted","Data":"95335fc09dfdce2b51d0d5dca69e64765be5a72c92668312f33e8944d05d7ce5"} Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.105413 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" podStartSLOduration=135.105372901 podStartE2EDuration="2m15.105372901s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:31.104728243 +0000 UTC m=+155.588192637" watchObservedRunningTime="2026-02-16 17:02:31.105372901 +0000 UTC m=+155.588837285" Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.112434 4870 generic.go:334] "Generic (PLEG): container finished" podID="11181780-6d8a-4342-839f-03e0cfcd153a" containerID="3bb6b6b1cc43eddad2c0ab56465dedec9eedb838d91a0fcab2ec1125b1091ae4" exitCode=0 Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.112440 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"11181780-6d8a-4342-839f-03e0cfcd153a","Type":"ContainerDied","Data":"3bb6b6b1cc43eddad2c0ab56465dedec9eedb838d91a0fcab2ec1125b1091ae4"} Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.128205 4870 generic.go:334] "Generic (PLEG): container finished" podID="cfc3b15d-ea9f-4842-8d24-0af28f83153d" containerID="ff290cb6fe4e36631966f53950a38e92601decbaa8ddc314a49aee62aa835eb5" exitCode=0 Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.128301 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68qjg" event={"ID":"cfc3b15d-ea9f-4842-8d24-0af28f83153d","Type":"ContainerDied","Data":"ff290cb6fe4e36631966f53950a38e92601decbaa8ddc314a49aee62aa835eb5"} Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.131297 4870 generic.go:334] "Generic (PLEG): container finished" podID="c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" containerID="f797f268ebd8d7c60e724942dfa666d810c4f1fa4e12cd69aee56ae191ad4ea2" exitCode=0 Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.131498 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrlg6" event={"ID":"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf","Type":"ContainerDied","Data":"f797f268ebd8d7c60e724942dfa666d810c4f1fa4e12cd69aee56ae191ad4ea2"} Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.131555 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrlg6" event={"ID":"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf","Type":"ContainerStarted","Data":"6d225f44bc887afe6f09fb5c08e37f1890846a608c16fa3696839bf688fa5c12"} Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.134482 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jq6pt"] Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.137878 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jq6pt" Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.147547 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jq6pt"] Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.248768 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-catalog-content\") pod \"redhat-operators-jq6pt\" (UID: \"b75fc0db-cff7-4c59-8019-e98bc08b1a0c\") " pod="openshift-marketplace/redhat-operators-jq6pt" Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.249215 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bcrx\" (UniqueName: \"kubernetes.io/projected/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-kube-api-access-8bcrx\") pod \"redhat-operators-jq6pt\" (UID: \"b75fc0db-cff7-4c59-8019-e98bc08b1a0c\") " pod="openshift-marketplace/redhat-operators-jq6pt" Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.249330 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-utilities\") pod \"redhat-operators-jq6pt\" (UID: \"b75fc0db-cff7-4c59-8019-e98bc08b1a0c\") " pod="openshift-marketplace/redhat-operators-jq6pt" Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.350416 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-utilities\") pod \"redhat-operators-jq6pt\" (UID: \"b75fc0db-cff7-4c59-8019-e98bc08b1a0c\") " pod="openshift-marketplace/redhat-operators-jq6pt" Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.350490 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-catalog-content\") pod \"redhat-operators-jq6pt\" (UID: \"b75fc0db-cff7-4c59-8019-e98bc08b1a0c\") " pod="openshift-marketplace/redhat-operators-jq6pt" Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.350630 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bcrx\" (UniqueName: \"kubernetes.io/projected/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-kube-api-access-8bcrx\") pod \"redhat-operators-jq6pt\" (UID: \"b75fc0db-cff7-4c59-8019-e98bc08b1a0c\") " pod="openshift-marketplace/redhat-operators-jq6pt" Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.350983 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-utilities\") pod \"redhat-operators-jq6pt\" (UID: \"b75fc0db-cff7-4c59-8019-e98bc08b1a0c\") " pod="openshift-marketplace/redhat-operators-jq6pt" Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.359548 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-catalog-content\") pod \"redhat-operators-jq6pt\" (UID: \"b75fc0db-cff7-4c59-8019-e98bc08b1a0c\") " pod="openshift-marketplace/redhat-operators-jq6pt" Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.375921 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bcrx\" (UniqueName: \"kubernetes.io/projected/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-kube-api-access-8bcrx\") pod \"redhat-operators-jq6pt\" (UID: \"b75fc0db-cff7-4c59-8019-e98bc08b1a0c\") " pod="openshift-marketplace/redhat-operators-jq6pt" Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.474560 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jq6pt" Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.510768 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.520598 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nr8mf"] Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.604819 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:31 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:31 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:31 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.604891 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:31 crc kubenswrapper[4870]: W0216 17:02:31.607261 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod18097b1b_64e3_437f_94f4_0e23ceae2bd4.slice/crio-7fcc460db3fc73ccd936723569096fd5596952a7f7f9700eb639f2bbd56bff6f WatchSource:0}: Error finding container 7fcc460db3fc73ccd936723569096fd5596952a7f7f9700eb639f2bbd56bff6f: Status 404 returned error can't find the container with id 7fcc460db3fc73ccd936723569096fd5596952a7f7f9700eb639f2bbd56bff6f Feb 16 17:02:31 crc kubenswrapper[4870]: I0216 17:02:31.861522 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jq6pt"] Feb 16 17:02:31 crc kubenswrapper[4870]: W0216 17:02:31.881502 4870 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb75fc0db_cff7_4c59_8019_e98bc08b1a0c.slice/crio-5b738b294c7bb2c5d26f0abe8d75dd591bcaf38bcc12389126782cd3ea391316 WatchSource:0}: Error finding container 5b738b294c7bb2c5d26f0abe8d75dd591bcaf38bcc12389126782cd3ea391316: Status 404 returned error can't find the container with id 5b738b294c7bb2c5d26f0abe8d75dd591bcaf38bcc12389126782cd3ea391316 Feb 16 17:02:32 crc kubenswrapper[4870]: I0216 17:02:32.150686 4870 generic.go:334] "Generic (PLEG): container finished" podID="588600bc-c342-4b4a-a755-0d8b541f0ca1" containerID="92665c2a66d1e4f722f998067f847933d66b236f415d91e02390047c4846301d" exitCode=0 Feb 16 17:02:32 crc kubenswrapper[4870]: I0216 17:02:32.150928 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nr8mf" event={"ID":"588600bc-c342-4b4a-a755-0d8b541f0ca1","Type":"ContainerDied","Data":"92665c2a66d1e4f722f998067f847933d66b236f415d91e02390047c4846301d"} Feb 16 17:02:32 crc kubenswrapper[4870]: I0216 17:02:32.151279 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nr8mf" event={"ID":"588600bc-c342-4b4a-a755-0d8b541f0ca1","Type":"ContainerStarted","Data":"556c2dfe2653a345edcbf966eb559526e0d7740ea4b5cf080e41724544f1bc4b"} Feb 16 17:02:32 crc kubenswrapper[4870]: I0216 17:02:32.156566 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jq6pt" event={"ID":"b75fc0db-cff7-4c59-8019-e98bc08b1a0c","Type":"ContainerStarted","Data":"5b738b294c7bb2c5d26f0abe8d75dd591bcaf38bcc12389126782cd3ea391316"} Feb 16 17:02:32 crc kubenswrapper[4870]: I0216 17:02:32.162542 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"18097b1b-64e3-437f-94f4-0e23ceae2bd4","Type":"ContainerStarted","Data":"7fcc460db3fc73ccd936723569096fd5596952a7f7f9700eb639f2bbd56bff6f"} Feb 16 17:02:32 crc kubenswrapper[4870]: I0216 17:02:32.469909 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:02:32 crc kubenswrapper[4870]: I0216 17:02:32.600653 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:32 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:32 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:32 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:32 crc kubenswrapper[4870]: I0216 17:02:32.600723 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:32 crc kubenswrapper[4870]: I0216 17:02:32.620744 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11181780-6d8a-4342-839f-03e0cfcd153a-kubelet-dir\") pod \"11181780-6d8a-4342-839f-03e0cfcd153a\" (UID: \"11181780-6d8a-4342-839f-03e0cfcd153a\") " Feb 16 17:02:32 crc kubenswrapper[4870]: I0216 17:02:32.620825 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11181780-6d8a-4342-839f-03e0cfcd153a-kube-api-access\") pod \"11181780-6d8a-4342-839f-03e0cfcd153a\" (UID: \"11181780-6d8a-4342-839f-03e0cfcd153a\") " Feb 16 17:02:32 crc kubenswrapper[4870]: I0216 17:02:32.620913 4870 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11181780-6d8a-4342-839f-03e0cfcd153a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "11181780-6d8a-4342-839f-03e0cfcd153a" (UID: "11181780-6d8a-4342-839f-03e0cfcd153a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:02:32 crc kubenswrapper[4870]: I0216 17:02:32.621197 4870 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11181780-6d8a-4342-839f-03e0cfcd153a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:32 crc kubenswrapper[4870]: I0216 17:02:32.639217 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11181780-6d8a-4342-839f-03e0cfcd153a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "11181780-6d8a-4342-839f-03e0cfcd153a" (UID: "11181780-6d8a-4342-839f-03e0cfcd153a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:02:32 crc kubenswrapper[4870]: I0216 17:02:32.722747 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11181780-6d8a-4342-839f-03e0cfcd153a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:33 crc kubenswrapper[4870]: I0216 17:02:33.226846 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"18097b1b-64e3-437f-94f4-0e23ceae2bd4","Type":"ContainerStarted","Data":"a12a421e305996a7677dbef796e9a7f7f6a4df1413df2513d2f2d564068433dd"} Feb 16 17:02:33 crc kubenswrapper[4870]: I0216 17:02:33.230433 4870 generic.go:334] "Generic (PLEG): container finished" podID="b75fc0db-cff7-4c59-8019-e98bc08b1a0c" containerID="075354193be51a5e49558fa708844ae0e43e5c0f1368bd7f96651ec023971fdb" exitCode=0 Feb 16 17:02:33 crc kubenswrapper[4870]: I0216 17:02:33.230525 4870 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-jq6pt" event={"ID":"b75fc0db-cff7-4c59-8019-e98bc08b1a0c","Type":"ContainerDied","Data":"075354193be51a5e49558fa708844ae0e43e5c0f1368bd7f96651ec023971fdb"} Feb 16 17:02:33 crc kubenswrapper[4870]: I0216 17:02:33.232788 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 17:02:33 crc kubenswrapper[4870]: I0216 17:02:33.232827 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"11181780-6d8a-4342-839f-03e0cfcd153a","Type":"ContainerDied","Data":"7010cea158a9adfc7fe4ce29f4d86c30eab7b783184097fa83e27405cca295c3"} Feb 16 17:02:33 crc kubenswrapper[4870]: I0216 17:02:33.232851 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7010cea158a9adfc7fe4ce29f4d86c30eab7b783184097fa83e27405cca295c3" Feb 16 17:02:33 crc kubenswrapper[4870]: I0216 17:02:33.245749 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.245724524 podStartE2EDuration="3.245724524s" podCreationTimestamp="2026-02-16 17:02:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:33.243693057 +0000 UTC m=+157.727157461" watchObservedRunningTime="2026-02-16 17:02:33.245724524 +0000 UTC m=+157.729188908" Feb 16 17:02:33 crc kubenswrapper[4870]: I0216 17:02:33.434928 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:33 crc kubenswrapper[4870]: I0216 17:02:33.439924 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-cnjq9" Feb 16 17:02:33 crc kubenswrapper[4870]: I0216 17:02:33.601265 4870 
patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:33 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:33 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:33 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:33 crc kubenswrapper[4870]: I0216 17:02:33.601338 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:33 crc kubenswrapper[4870]: I0216 17:02:33.993669 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-k28wm" Feb 16 17:02:34 crc kubenswrapper[4870]: I0216 17:02:34.249848 4870 generic.go:334] "Generic (PLEG): container finished" podID="18097b1b-64e3-437f-94f4-0e23ceae2bd4" containerID="a12a421e305996a7677dbef796e9a7f7f6a4df1413df2513d2f2d564068433dd" exitCode=0 Feb 16 17:02:34 crc kubenswrapper[4870]: I0216 17:02:34.250158 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"18097b1b-64e3-437f-94f4-0e23ceae2bd4","Type":"ContainerDied","Data":"a12a421e305996a7677dbef796e9a7f7f6a4df1413df2513d2f2d564068433dd"} Feb 16 17:02:34 crc kubenswrapper[4870]: I0216 17:02:34.606102 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:34 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:34 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:34 crc 
kubenswrapper[4870]: healthz check failed Feb 16 17:02:34 crc kubenswrapper[4870]: I0216 17:02:34.606221 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:35 crc kubenswrapper[4870]: I0216 17:02:35.367668 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:02:35 crc kubenswrapper[4870]: I0216 17:02:35.367756 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:02:35 crc kubenswrapper[4870]: I0216 17:02:35.567348 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 17:02:35 crc kubenswrapper[4870]: I0216 17:02:35.606223 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:35 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:35 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:35 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:35 crc kubenswrapper[4870]: I0216 17:02:35.606301 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:35 crc kubenswrapper[4870]: I0216 17:02:35.717136 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18097b1b-64e3-437f-94f4-0e23ceae2bd4-kube-api-access\") pod \"18097b1b-64e3-437f-94f4-0e23ceae2bd4\" (UID: \"18097b1b-64e3-437f-94f4-0e23ceae2bd4\") " Feb 16 17:02:35 crc kubenswrapper[4870]: I0216 17:02:35.717312 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18097b1b-64e3-437f-94f4-0e23ceae2bd4-kubelet-dir\") pod \"18097b1b-64e3-437f-94f4-0e23ceae2bd4\" (UID: \"18097b1b-64e3-437f-94f4-0e23ceae2bd4\") " Feb 16 17:02:35 crc kubenswrapper[4870]: I0216 17:02:35.717641 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18097b1b-64e3-437f-94f4-0e23ceae2bd4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "18097b1b-64e3-437f-94f4-0e23ceae2bd4" (UID: "18097b1b-64e3-437f-94f4-0e23ceae2bd4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:02:35 crc kubenswrapper[4870]: I0216 17:02:35.730230 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18097b1b-64e3-437f-94f4-0e23ceae2bd4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "18097b1b-64e3-437f-94f4-0e23ceae2bd4" (UID: "18097b1b-64e3-437f-94f4-0e23ceae2bd4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:02:35 crc kubenswrapper[4870]: I0216 17:02:35.818877 4870 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18097b1b-64e3-437f-94f4-0e23ceae2bd4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:35 crc kubenswrapper[4870]: I0216 17:02:35.818922 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18097b1b-64e3-437f-94f4-0e23ceae2bd4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 17:02:36 crc kubenswrapper[4870]: I0216 17:02:36.290932 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"18097b1b-64e3-437f-94f4-0e23ceae2bd4","Type":"ContainerDied","Data":"7fcc460db3fc73ccd936723569096fd5596952a7f7f9700eb639f2bbd56bff6f"} Feb 16 17:02:36 crc kubenswrapper[4870]: I0216 17:02:36.291052 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fcc460db3fc73ccd936723569096fd5596952a7f7f9700eb639f2bbd56bff6f" Feb 16 17:02:36 crc kubenswrapper[4870]: I0216 17:02:36.291214 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 17:02:36 crc kubenswrapper[4870]: I0216 17:02:36.599731 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:36 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:36 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:36 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:36 crc kubenswrapper[4870]: I0216 17:02:36.600123 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:37 crc kubenswrapper[4870]: I0216 17:02:37.600336 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:37 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:37 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:37 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:37 crc kubenswrapper[4870]: I0216 17:02:37.600533 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:38 crc kubenswrapper[4870]: I0216 17:02:38.260938 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-fplj9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": 
dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 16 17:02:38 crc kubenswrapper[4870]: I0216 17:02:38.260978 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-fplj9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 16 17:02:38 crc kubenswrapper[4870]: I0216 17:02:38.261047 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fplj9" podUID="6cb09d74-7044-4c8c-a89b-6bf4593ffb9d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 16 17:02:38 crc kubenswrapper[4870]: I0216 17:02:38.261055 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fplj9" podUID="6cb09d74-7044-4c8c-a89b-6bf4593ffb9d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 16 17:02:38 crc kubenswrapper[4870]: I0216 17:02:38.287143 4870 patch_prober.go:28] interesting pod/console-f9d7485db-n96b6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 16 17:02:38 crc kubenswrapper[4870]: I0216 17:02:38.287229 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-n96b6" podUID="ed053e72-4999-4b5d-a9f3-c58b92280c8c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 16 17:02:38 crc kubenswrapper[4870]: I0216 17:02:38.598836 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:38 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:38 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:38 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:38 crc kubenswrapper[4870]: I0216 17:02:38.599171 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:39 crc kubenswrapper[4870]: I0216 17:02:39.590869 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs\") pod \"network-metrics-daemon-zsfxc\" (UID: \"d13b0b83-258a-4545-b358-e08252dbbe87\") " pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:02:39 crc kubenswrapper[4870]: I0216 17:02:39.596917 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:39 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:39 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:39 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:39 crc kubenswrapper[4870]: I0216 17:02:39.597008 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:39 crc kubenswrapper[4870]: I0216 17:02:39.613706 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d13b0b83-258a-4545-b358-e08252dbbe87-metrics-certs\") pod \"network-metrics-daemon-zsfxc\" (UID: \"d13b0b83-258a-4545-b358-e08252dbbe87\") " pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:02:39 crc kubenswrapper[4870]: I0216 17:02:39.654799 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zsfxc" Feb 16 17:02:40 crc kubenswrapper[4870]: I0216 17:02:40.598683 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:40 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:40 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:40 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:40 crc kubenswrapper[4870]: I0216 17:02:40.598770 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:41 crc kubenswrapper[4870]: I0216 17:02:41.597405 4870 patch_prober.go:28] interesting pod/router-default-5444994796-b66lf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:02:41 crc kubenswrapper[4870]: [-]has-synced failed: reason withheld Feb 16 17:02:41 crc kubenswrapper[4870]: [+]process-running ok Feb 16 17:02:41 crc kubenswrapper[4870]: healthz check failed Feb 16 17:02:41 crc kubenswrapper[4870]: I0216 17:02:41.597750 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b66lf" 
podUID="4524185d-72ed-4ff4-be99-2a01cf133dbc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:02:42 crc kubenswrapper[4870]: I0216 17:02:42.605109 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:42 crc kubenswrapper[4870]: I0216 17:02:42.610208 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-b66lf" Feb 16 17:02:45 crc kubenswrapper[4870]: I0216 17:02:45.625347 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-zsfxc"] Feb 16 17:02:45 crc kubenswrapper[4870]: W0216 17:02:45.628821 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd13b0b83_258a_4545_b358_e08252dbbe87.slice/crio-51b92d022b830e2cef1ba564199d07b9af482eeaa2d93406dd35a694aaae2cd7 WatchSource:0}: Error finding container 51b92d022b830e2cef1ba564199d07b9af482eeaa2d93406dd35a694aaae2cd7: Status 404 returned error can't find the container with id 51b92d022b830e2cef1ba564199d07b9af482eeaa2d93406dd35a694aaae2cd7 Feb 16 17:02:46 crc kubenswrapper[4870]: I0216 17:02:46.464030 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" event={"ID":"d13b0b83-258a-4545-b358-e08252dbbe87","Type":"ContainerStarted","Data":"2990a485575fbca55c6655ad58d3d5feadb0d84b02c8ad6ff14bc4e6134c76b8"} Feb 16 17:02:46 crc kubenswrapper[4870]: I0216 17:02:46.464438 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zsfxc" event={"ID":"d13b0b83-258a-4545-b358-e08252dbbe87","Type":"ContainerStarted","Data":"51b92d022b830e2cef1ba564199d07b9af482eeaa2d93406dd35a694aaae2cd7"} Feb 16 17:02:47 crc kubenswrapper[4870]: I0216 17:02:47.472327 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-zsfxc" event={"ID":"d13b0b83-258a-4545-b358-e08252dbbe87","Type":"ContainerStarted","Data":"8aac9dd5b2fdaf32cced4417f6cd65756237cadf49b6d13807fe65104029bd3e"} Feb 16 17:02:47 crc kubenswrapper[4870]: I0216 17:02:47.500766 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-zsfxc" podStartSLOduration=151.500715018 podStartE2EDuration="2m31.500715018s" podCreationTimestamp="2026-02-16 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:02:47.498817614 +0000 UTC m=+171.982282008" watchObservedRunningTime="2026-02-16 17:02:47.500715018 +0000 UTC m=+171.984179422" Feb 16 17:02:48 crc kubenswrapper[4870]: I0216 17:02:48.263760 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-fplj9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 16 17:02:48 crc kubenswrapper[4870]: I0216 17:02:48.263835 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fplj9" podUID="6cb09d74-7044-4c8c-a89b-6bf4593ffb9d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 16 17:02:48 crc kubenswrapper[4870]: I0216 17:02:48.263888 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-fplj9" Feb 16 17:02:48 crc kubenswrapper[4870]: I0216 17:02:48.264751 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"960a254c60109051e21b55b262560f10ccc6e16bbe0dbbba601ecb6b242f45ac"} pod="openshift-console/downloads-7954f5f757-fplj9" 
containerMessage="Container download-server failed liveness probe, will be restarted" Feb 16 17:02:48 crc kubenswrapper[4870]: I0216 17:02:48.264865 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-fplj9" podUID="6cb09d74-7044-4c8c-a89b-6bf4593ffb9d" containerName="download-server" containerID="cri-o://960a254c60109051e21b55b262560f10ccc6e16bbe0dbbba601ecb6b242f45ac" gracePeriod=2 Feb 16 17:02:48 crc kubenswrapper[4870]: I0216 17:02:48.265475 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-fplj9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 16 17:02:48 crc kubenswrapper[4870]: I0216 17:02:48.265542 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fplj9" podUID="6cb09d74-7044-4c8c-a89b-6bf4593ffb9d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 16 17:02:48 crc kubenswrapper[4870]: I0216 17:02:48.266398 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-fplj9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 16 17:02:48 crc kubenswrapper[4870]: I0216 17:02:48.266428 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fplj9" podUID="6cb09d74-7044-4c8c-a89b-6bf4593ffb9d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 16 17:02:48 crc kubenswrapper[4870]: I0216 17:02:48.284259 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:48 crc kubenswrapper[4870]: I0216 17:02:48.289146 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:02:49 crc kubenswrapper[4870]: I0216 17:02:49.486870 4870 generic.go:334] "Generic (PLEG): container finished" podID="6cb09d74-7044-4c8c-a89b-6bf4593ffb9d" containerID="960a254c60109051e21b55b262560f10ccc6e16bbe0dbbba601ecb6b242f45ac" exitCode=0 Feb 16 17:02:49 crc kubenswrapper[4870]: I0216 17:02:49.486996 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fplj9" event={"ID":"6cb09d74-7044-4c8c-a89b-6bf4593ffb9d","Type":"ContainerDied","Data":"960a254c60109051e21b55b262560f10ccc6e16bbe0dbbba601ecb6b242f45ac"} Feb 16 17:02:49 crc kubenswrapper[4870]: I0216 17:02:49.730057 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:02:58 crc kubenswrapper[4870]: I0216 17:02:58.261204 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-fplj9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 16 17:02:58 crc kubenswrapper[4870]: I0216 17:02:58.263164 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fplj9" podUID="6cb09d74-7044-4c8c-a89b-6bf4593ffb9d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 16 17:02:58 crc kubenswrapper[4870]: I0216 17:02:58.846838 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v25cw" Feb 16 17:03:04 crc kubenswrapper[4870]: E0216 17:03:04.120738 4870 
log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 16 17:03:04 crc kubenswrapper[4870]: E0216 17:03:04.121148 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-skkfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-nr8mf_openshift-marketplace(588600bc-c342-4b4a-a755-0d8b541f0ca1): 
ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 17:03:04 crc kubenswrapper[4870]: E0216 17:03:04.122379 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-nr8mf" podUID="588600bc-c342-4b4a-a755-0d8b541f0ca1" Feb 16 17:03:04 crc kubenswrapper[4870]: I0216 17:03:04.665497 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 17:03:05 crc kubenswrapper[4870]: I0216 17:03:05.367496 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:03:05 crc kubenswrapper[4870]: I0216 17:03:05.367593 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:03:05 crc kubenswrapper[4870]: E0216 17:03:05.671685 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-nr8mf" podUID="588600bc-c342-4b4a-a755-0d8b541f0ca1" Feb 16 17:03:06 crc kubenswrapper[4870]: E0216 17:03:06.005456 4870 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 16 17:03:06 crc kubenswrapper[4870]: E0216 17:03:06.005817 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-snpms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-wdq4b_openshift-marketplace(2b63cc22-5778-4805-b6fd-97f2ce43fda1): ErrImagePull: rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 17:03:06 crc kubenswrapper[4870]: E0216 17:03:06.007746 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-wdq4b" podUID="2b63cc22-5778-4805-b6fd-97f2ce43fda1" Feb 16 17:03:08 crc kubenswrapper[4870]: I0216 17:03:08.262174 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-fplj9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 16 17:03:08 crc kubenswrapper[4870]: I0216 17:03:08.262255 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fplj9" podUID="6cb09d74-7044-4c8c-a89b-6bf4593ffb9d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 16 17:03:08 crc kubenswrapper[4870]: E0216 17:03:08.726936 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-wdq4b" podUID="2b63cc22-5778-4805-b6fd-97f2ce43fda1" Feb 16 17:03:09 crc kubenswrapper[4870]: E0216 17:03:09.009790 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 16 17:03:09 crc kubenswrapper[4870]: E0216 17:03:09.009980 4870 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plgt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-cdx44_openshift-marketplace(11e0dfd3-85a3-45b6-889d-31159c5a23cb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 17:03:09 crc kubenswrapper[4870]: E0216 17:03:09.019885 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-cdx44" podUID="11e0dfd3-85a3-45b6-889d-31159c5a23cb" Feb 16 17:03:09 crc kubenswrapper[4870]: E0216 17:03:09.242318 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 16 17:03:09 crc kubenswrapper[4870]: E0216 17:03:09.242505 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wv4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,Ap
pArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qrlg6_openshift-marketplace(c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 17:03:09 crc kubenswrapper[4870]: E0216 17:03:09.243677 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qrlg6" podUID="c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" Feb 16 17:03:09 crc kubenswrapper[4870]: E0216 17:03:09.325441 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 16 17:03:09 crc kubenswrapper[4870]: E0216 17:03:09.325650 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2ghk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-68qjg_openshift-marketplace(cfc3b15d-ea9f-4842-8d24-0af28f83153d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 17:03:09 crc kubenswrapper[4870]: E0216 17:03:09.326878 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-68qjg" podUID="cfc3b15d-ea9f-4842-8d24-0af28f83153d" Feb 16 17:03:10 crc 
kubenswrapper[4870]: E0216 17:03:10.317563 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-cdx44" podUID="11e0dfd3-85a3-45b6-889d-31159c5a23cb" Feb 16 17:03:10 crc kubenswrapper[4870]: E0216 17:03:10.317910 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-68qjg" podUID="cfc3b15d-ea9f-4842-8d24-0af28f83153d" Feb 16 17:03:10 crc kubenswrapper[4870]: E0216 17:03:10.317979 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qrlg6" podUID="c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 17:03:10.367177 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 17:03:10 crc kubenswrapper[4870]: E0216 17:03:10.367416 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11181780-6d8a-4342-839f-03e0cfcd153a" containerName="pruner" Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 17:03:10.367429 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="11181780-6d8a-4342-839f-03e0cfcd153a" containerName="pruner" Feb 16 17:03:10 crc kubenswrapper[4870]: E0216 17:03:10.367441 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18097b1b-64e3-437f-94f4-0e23ceae2bd4" containerName="pruner" Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 17:03:10.367447 4870 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="18097b1b-64e3-437f-94f4-0e23ceae2bd4" containerName="pruner" Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 17:03:10.367542 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="18097b1b-64e3-437f-94f4-0e23ceae2bd4" containerName="pruner" Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 17:03:10.367556 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="11181780-6d8a-4342-839f-03e0cfcd153a" containerName="pruner" Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 17:03:10.376511 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 17:03:10.376629 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 17:03:10.379418 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 17:03:10.380377 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 17:03:10 crc kubenswrapper[4870]: E0216 17:03:10.413424 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 16 17:03:10 crc kubenswrapper[4870]: E0216 17:03:10.413586 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8czvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-w5rt4_openshift-marketplace(53e01b72-44e9-4f22-833e-9972542aca29): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 17:03:10 crc kubenswrapper[4870]: E0216 17:03:10.414768 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-w5rt4" podUID="53e01b72-44e9-4f22-833e-9972542aca29" Feb 16 17:03:10 crc 
kubenswrapper[4870]: E0216 17:03:10.415985 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 16 17:03:10 crc kubenswrapper[4870]: E0216 17:03:10.416068 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6gs4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-tmfnv_openshift-marketplace(850617f1-446f-44e3-9a83-215215f95cbd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 17:03:10 crc kubenswrapper[4870]: E0216 17:03:10.417222 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tmfnv" podUID="850617f1-446f-44e3-9a83-215215f95cbd" Feb 16 17:03:10 crc kubenswrapper[4870]: E0216 17:03:10.457324 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 16 17:03:10 crc kubenswrapper[4870]: E0216 17:03:10.457477 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bcrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jq6pt_openshift-marketplace(b75fc0db-cff7-4c59-8019-e98bc08b1a0c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 17:03:10 crc kubenswrapper[4870]: E0216 17:03:10.458655 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jq6pt" podUID="b75fc0db-cff7-4c59-8019-e98bc08b1a0c" Feb 16 17:03:10 crc 
kubenswrapper[4870]: I0216 17:03:10.488249 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b2fdbd-7a81-4854-adf4-b06d37ca080f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"66b2fdbd-7a81-4854-adf4-b06d37ca080f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 17:03:10.488485 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b2fdbd-7a81-4854-adf4-b06d37ca080f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"66b2fdbd-7a81-4854-adf4-b06d37ca080f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 17:03:10.589536 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b2fdbd-7a81-4854-adf4-b06d37ca080f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"66b2fdbd-7a81-4854-adf4-b06d37ca080f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 17:03:10.589652 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b2fdbd-7a81-4854-adf4-b06d37ca080f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"66b2fdbd-7a81-4854-adf4-b06d37ca080f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 17:03:10.589715 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b2fdbd-7a81-4854-adf4-b06d37ca080f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"66b2fdbd-7a81-4854-adf4-b06d37ca080f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 
17:03:10.608167 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b2fdbd-7a81-4854-adf4-b06d37ca080f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"66b2fdbd-7a81-4854-adf4-b06d37ca080f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 17:03:10.608850 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fplj9" event={"ID":"6cb09d74-7044-4c8c-a89b-6bf4593ffb9d","Type":"ContainerStarted","Data":"b87c80d74696dfef496971f1741ea6e00574c3db108326da758c52c16f421dff"} Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 17:03:10.609720 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-fplj9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 17:03:10.609768 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fplj9" podUID="6cb09d74-7044-4c8c-a89b-6bf4593ffb9d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 16 17:03:10 crc kubenswrapper[4870]: E0216 17:03:10.610820 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tmfnv" podUID="850617f1-446f-44e3-9a83-215215f95cbd" Feb 16 17:03:10 crc kubenswrapper[4870]: E0216 17:03:10.611059 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-w5rt4" podUID="53e01b72-44e9-4f22-833e-9972542aca29" Feb 16 17:03:10 crc kubenswrapper[4870]: I0216 17:03:10.698257 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 17:03:11 crc kubenswrapper[4870]: I0216 17:03:11.087325 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 17:03:11 crc kubenswrapper[4870]: I0216 17:03:11.614848 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"66b2fdbd-7a81-4854-adf4-b06d37ca080f","Type":"ContainerStarted","Data":"aa177344cebcf825fd5be5f52ca365e77ef7f7b7f6c6afdc2cdcff8ff77709c2"} Feb 16 17:03:11 crc kubenswrapper[4870]: I0216 17:03:11.615184 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fplj9" Feb 16 17:03:11 crc kubenswrapper[4870]: I0216 17:03:11.615200 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"66b2fdbd-7a81-4854-adf4-b06d37ca080f","Type":"ContainerStarted","Data":"77acaa3f9509efb6349cc68e50c8ea2d69b7bf64fc4d48c37a8608b256a5bf9a"} Feb 16 17:03:11 crc kubenswrapper[4870]: I0216 17:03:11.615436 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-fplj9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 16 17:03:11 crc kubenswrapper[4870]: I0216 17:03:11.615493 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fplj9" podUID="6cb09d74-7044-4c8c-a89b-6bf4593ffb9d" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 16 17:03:11 crc kubenswrapper[4870]: I0216 17:03:11.632118 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.63209961 podStartE2EDuration="1.63209961s" podCreationTimestamp="2026-02-16 17:03:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:03:11.628880138 +0000 UTC m=+196.112344522" watchObservedRunningTime="2026-02-16 17:03:11.63209961 +0000 UTC m=+196.115563994" Feb 16 17:03:12 crc kubenswrapper[4870]: I0216 17:03:12.623020 4870 generic.go:334] "Generic (PLEG): container finished" podID="66b2fdbd-7a81-4854-adf4-b06d37ca080f" containerID="aa177344cebcf825fd5be5f52ca365e77ef7f7b7f6c6afdc2cdcff8ff77709c2" exitCode=0 Feb 16 17:03:12 crc kubenswrapper[4870]: I0216 17:03:12.623448 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"66b2fdbd-7a81-4854-adf4-b06d37ca080f","Type":"ContainerDied","Data":"aa177344cebcf825fd5be5f52ca365e77ef7f7b7f6c6afdc2cdcff8ff77709c2"} Feb 16 17:03:12 crc kubenswrapper[4870]: I0216 17:03:12.624165 4870 patch_prober.go:28] interesting pod/downloads-7954f5f757-fplj9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 16 17:03:12 crc kubenswrapper[4870]: I0216 17:03:12.624203 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fplj9" podUID="6cb09d74-7044-4c8c-a89b-6bf4593ffb9d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 16 17:03:13 crc kubenswrapper[4870]: I0216 17:03:13.903311 
4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 17:03:14 crc kubenswrapper[4870]: I0216 17:03:14.033415 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b2fdbd-7a81-4854-adf4-b06d37ca080f-kubelet-dir\") pod \"66b2fdbd-7a81-4854-adf4-b06d37ca080f\" (UID: \"66b2fdbd-7a81-4854-adf4-b06d37ca080f\") " Feb 16 17:03:14 crc kubenswrapper[4870]: I0216 17:03:14.033491 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b2fdbd-7a81-4854-adf4-b06d37ca080f-kube-api-access\") pod \"66b2fdbd-7a81-4854-adf4-b06d37ca080f\" (UID: \"66b2fdbd-7a81-4854-adf4-b06d37ca080f\") " Feb 16 17:03:14 crc kubenswrapper[4870]: I0216 17:03:14.033586 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66b2fdbd-7a81-4854-adf4-b06d37ca080f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "66b2fdbd-7a81-4854-adf4-b06d37ca080f" (UID: "66b2fdbd-7a81-4854-adf4-b06d37ca080f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:03:14 crc kubenswrapper[4870]: I0216 17:03:14.034835 4870 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b2fdbd-7a81-4854-adf4-b06d37ca080f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:14 crc kubenswrapper[4870]: I0216 17:03:14.040174 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b2fdbd-7a81-4854-adf4-b06d37ca080f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "66b2fdbd-7a81-4854-adf4-b06d37ca080f" (UID: "66b2fdbd-7a81-4854-adf4-b06d37ca080f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:03:14 crc kubenswrapper[4870]: I0216 17:03:14.136857 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b2fdbd-7a81-4854-adf4-b06d37ca080f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:14 crc kubenswrapper[4870]: I0216 17:03:14.638743 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"66b2fdbd-7a81-4854-adf4-b06d37ca080f","Type":"ContainerDied","Data":"77acaa3f9509efb6349cc68e50c8ea2d69b7bf64fc4d48c37a8608b256a5bf9a"} Feb 16 17:03:14 crc kubenswrapper[4870]: I0216 17:03:14.639144 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77acaa3f9509efb6349cc68e50c8ea2d69b7bf64fc4d48c37a8608b256a5bf9a" Feb 16 17:03:14 crc kubenswrapper[4870]: I0216 17:03:14.638769 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 17:03:17 crc kubenswrapper[4870]: I0216 17:03:17.560375 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 17:03:17 crc kubenswrapper[4870]: E0216 17:03:17.560844 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b2fdbd-7a81-4854-adf4-b06d37ca080f" containerName="pruner" Feb 16 17:03:17 crc kubenswrapper[4870]: I0216 17:03:17.560856 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b2fdbd-7a81-4854-adf4-b06d37ca080f" containerName="pruner" Feb 16 17:03:17 crc kubenswrapper[4870]: I0216 17:03:17.560979 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b2fdbd-7a81-4854-adf4-b06d37ca080f" containerName="pruner" Feb 16 17:03:17 crc kubenswrapper[4870]: I0216 17:03:17.561362 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:03:17 crc kubenswrapper[4870]: I0216 17:03:17.563475 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 17:03:17 crc kubenswrapper[4870]: I0216 17:03:17.563854 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 17:03:17 crc kubenswrapper[4870]: I0216 17:03:17.570682 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 17:03:17 crc kubenswrapper[4870]: I0216 17:03:17.578578 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-kube-api-access\") pod \"installer-9-crc\" (UID: \"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:03:17 crc kubenswrapper[4870]: I0216 17:03:17.578640 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-var-lock\") pod \"installer-9-crc\" (UID: \"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:03:17 crc kubenswrapper[4870]: I0216 17:03:17.578682 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:03:17 crc kubenswrapper[4870]: I0216 17:03:17.679442 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:03:17 crc kubenswrapper[4870]: I0216 17:03:17.679493 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-kube-api-access\") pod \"installer-9-crc\" (UID: \"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:03:17 crc kubenswrapper[4870]: I0216 17:03:17.679541 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-var-lock\") pod \"installer-9-crc\" (UID: \"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:03:17 crc kubenswrapper[4870]: I0216 17:03:17.679598 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-var-lock\") pod \"installer-9-crc\" (UID: \"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:03:17 crc kubenswrapper[4870]: I0216 17:03:17.679631 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:03:17 crc kubenswrapper[4870]: I0216 17:03:17.701477 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-kube-api-access\") pod \"installer-9-crc\" (UID: \"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:03:17 crc kubenswrapper[4870]: I0216 17:03:17.878603 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:03:18 crc kubenswrapper[4870]: I0216 17:03:18.080789 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 17:03:18 crc kubenswrapper[4870]: I0216 17:03:18.283365 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-fplj9" Feb 16 17:03:18 crc kubenswrapper[4870]: I0216 17:03:18.658153 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a","Type":"ContainerStarted","Data":"d1b7d562b23ab624d2cc820dbd4491bd117ae564bb7b50e1df70ba2cc7e0f7df"} Feb 16 17:03:18 crc kubenswrapper[4870]: I0216 17:03:18.658256 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a","Type":"ContainerStarted","Data":"d1cf09728a5f607507bb3efcc5e5befc2856a901f08e3240d06574f448dcacc5"} Feb 16 17:03:18 crc kubenswrapper[4870]: I0216 17:03:18.679028 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.679008855 podStartE2EDuration="1.679008855s" podCreationTimestamp="2026-02-16 17:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:03:18.676371395 +0000 UTC m=+203.159835779" watchObservedRunningTime="2026-02-16 17:03:18.679008855 +0000 UTC m=+203.162473229" Feb 16 17:03:21 crc kubenswrapper[4870]: I0216 17:03:21.677649 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nr8mf" 
event={"ID":"588600bc-c342-4b4a-a755-0d8b541f0ca1","Type":"ContainerStarted","Data":"99ddf220e46558cf1cfb2a0e708d7546441a2b8ce1bdb373f86fe003aaabaae3"} Feb 16 17:03:22 crc kubenswrapper[4870]: I0216 17:03:22.683409 4870 generic.go:334] "Generic (PLEG): container finished" podID="2b63cc22-5778-4805-b6fd-97f2ce43fda1" containerID="ada51e5d9e585ef849093746d09212c8f23dc4bc5ed458ec072a5318a412829f" exitCode=0 Feb 16 17:03:22 crc kubenswrapper[4870]: I0216 17:03:22.683480 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdq4b" event={"ID":"2b63cc22-5778-4805-b6fd-97f2ce43fda1","Type":"ContainerDied","Data":"ada51e5d9e585ef849093746d09212c8f23dc4bc5ed458ec072a5318a412829f"} Feb 16 17:03:22 crc kubenswrapper[4870]: I0216 17:03:22.685479 4870 generic.go:334] "Generic (PLEG): container finished" podID="588600bc-c342-4b4a-a755-0d8b541f0ca1" containerID="99ddf220e46558cf1cfb2a0e708d7546441a2b8ce1bdb373f86fe003aaabaae3" exitCode=0 Feb 16 17:03:22 crc kubenswrapper[4870]: I0216 17:03:22.685514 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nr8mf" event={"ID":"588600bc-c342-4b4a-a755-0d8b541f0ca1","Type":"ContainerDied","Data":"99ddf220e46558cf1cfb2a0e708d7546441a2b8ce1bdb373f86fe003aaabaae3"} Feb 16 17:03:27 crc kubenswrapper[4870]: I0216 17:03:27.247097 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5ll8r"] Feb 16 17:03:33 crc kubenswrapper[4870]: I0216 17:03:33.738605 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jq6pt" event={"ID":"b75fc0db-cff7-4c59-8019-e98bc08b1a0c","Type":"ContainerStarted","Data":"ebf6295ee35a5f1d4e8245d8e4ba6afd92a33ed63d06acd4638b2ae189816322"} Feb 16 17:03:33 crc kubenswrapper[4870]: I0216 17:03:33.740470 4870 generic.go:334] "Generic (PLEG): container finished" podID="53e01b72-44e9-4f22-833e-9972542aca29" 
containerID="9db9050a91d47e0228a28605d0435c316dd012c2b734a88f0e2f219b4a3f4b3a" exitCode=0 Feb 16 17:03:33 crc kubenswrapper[4870]: I0216 17:03:33.740554 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5rt4" event={"ID":"53e01b72-44e9-4f22-833e-9972542aca29","Type":"ContainerDied","Data":"9db9050a91d47e0228a28605d0435c316dd012c2b734a88f0e2f219b4a3f4b3a"} Feb 16 17:03:33 crc kubenswrapper[4870]: I0216 17:03:33.742296 4870 generic.go:334] "Generic (PLEG): container finished" podID="cfc3b15d-ea9f-4842-8d24-0af28f83153d" containerID="6da8da76ed26c782e5e383f5d518184bc3c1090841d638246ad498d0b280627a" exitCode=0 Feb 16 17:03:33 crc kubenswrapper[4870]: I0216 17:03:33.742400 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68qjg" event={"ID":"cfc3b15d-ea9f-4842-8d24-0af28f83153d","Type":"ContainerDied","Data":"6da8da76ed26c782e5e383f5d518184bc3c1090841d638246ad498d0b280627a"} Feb 16 17:03:33 crc kubenswrapper[4870]: I0216 17:03:33.747546 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdq4b" event={"ID":"2b63cc22-5778-4805-b6fd-97f2ce43fda1","Type":"ContainerStarted","Data":"cd9e6630f40a66fe711a86a4a84de93ccefa6433cc5cae058a71370e4e6134de"} Feb 16 17:03:33 crc kubenswrapper[4870]: I0216 17:03:33.750290 4870 generic.go:334] "Generic (PLEG): container finished" podID="c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" containerID="49404836917e7e7eb2291b66763d2b5422d29a2a9140cc2a131f5b1aeac33fec" exitCode=0 Feb 16 17:03:33 crc kubenswrapper[4870]: I0216 17:03:33.750368 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrlg6" event={"ID":"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf","Type":"ContainerDied","Data":"49404836917e7e7eb2291b66763d2b5422d29a2a9140cc2a131f5b1aeac33fec"} Feb 16 17:03:33 crc kubenswrapper[4870]: I0216 17:03:33.752335 4870 generic.go:334] "Generic (PLEG): container 
finished" podID="850617f1-446f-44e3-9a83-215215f95cbd" containerID="2b44010d363ef4106f93687a9a8c52e639b7d84ad7d084696efbb0a60fefbfa2" exitCode=0 Feb 16 17:03:33 crc kubenswrapper[4870]: I0216 17:03:33.752400 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmfnv" event={"ID":"850617f1-446f-44e3-9a83-215215f95cbd","Type":"ContainerDied","Data":"2b44010d363ef4106f93687a9a8c52e639b7d84ad7d084696efbb0a60fefbfa2"} Feb 16 17:03:33 crc kubenswrapper[4870]: I0216 17:03:33.755997 4870 generic.go:334] "Generic (PLEG): container finished" podID="11e0dfd3-85a3-45b6-889d-31159c5a23cb" containerID="e744686d7f4aad132f77b8a00b47f245b1df26243168fb90b1c56f1e657211d9" exitCode=0 Feb 16 17:03:33 crc kubenswrapper[4870]: I0216 17:03:33.756048 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdx44" event={"ID":"11e0dfd3-85a3-45b6-889d-31159c5a23cb","Type":"ContainerDied","Data":"e744686d7f4aad132f77b8a00b47f245b1df26243168fb90b1c56f1e657211d9"} Feb 16 17:03:33 crc kubenswrapper[4870]: I0216 17:03:33.758989 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nr8mf" event={"ID":"588600bc-c342-4b4a-a755-0d8b541f0ca1","Type":"ContainerStarted","Data":"142dbe04c376542b582e1f6d02093f4e59951e77a27f7f1dbd4424dffc087c6b"} Feb 16 17:03:33 crc kubenswrapper[4870]: I0216 17:03:33.862618 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nr8mf" podStartSLOduration=8.393996005 podStartE2EDuration="1m3.862603812s" podCreationTimestamp="2026-02-16 17:02:30 +0000 UTC" firstStartedPulling="2026-02-16 17:02:32.165720184 +0000 UTC m=+156.649184568" lastFinishedPulling="2026-02-16 17:03:27.634327991 +0000 UTC m=+212.117792375" observedRunningTime="2026-02-16 17:03:33.861056255 +0000 UTC m=+218.344520639" watchObservedRunningTime="2026-02-16 17:03:33.862603812 +0000 UTC m=+218.346068196" Feb 16 
17:03:33 crc kubenswrapper[4870]: I0216 17:03:33.898897 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wdq4b" podStartSLOduration=4.401861683 podStartE2EDuration="1m6.898878325s" podCreationTimestamp="2026-02-16 17:02:27 +0000 UTC" firstStartedPulling="2026-02-16 17:02:30.060838839 +0000 UTC m=+154.544303223" lastFinishedPulling="2026-02-16 17:03:32.557855471 +0000 UTC m=+217.041319865" observedRunningTime="2026-02-16 17:03:33.897776541 +0000 UTC m=+218.381240945" watchObservedRunningTime="2026-02-16 17:03:33.898878325 +0000 UTC m=+218.382342709" Feb 16 17:03:34 crc kubenswrapper[4870]: I0216 17:03:34.771805 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5rt4" event={"ID":"53e01b72-44e9-4f22-833e-9972542aca29","Type":"ContainerStarted","Data":"1752a86a8fe7087a863ea012c6297a318b5aaca5bd187be84f4152af9592b0df"} Feb 16 17:03:34 crc kubenswrapper[4870]: I0216 17:03:34.774495 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68qjg" event={"ID":"cfc3b15d-ea9f-4842-8d24-0af28f83153d","Type":"ContainerStarted","Data":"e67eb242b9a4e16cf28e9e08913657bb9191af8d4075bb5c9384bed824b5c5a4"} Feb 16 17:03:34 crc kubenswrapper[4870]: I0216 17:03:34.776796 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrlg6" event={"ID":"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf","Type":"ContainerStarted","Data":"adecc2af7271c47ec968f6f90eca9456bc10b12a5ca533a5bdb20b48cf321769"} Feb 16 17:03:34 crc kubenswrapper[4870]: I0216 17:03:34.778974 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmfnv" event={"ID":"850617f1-446f-44e3-9a83-215215f95cbd","Type":"ContainerStarted","Data":"37bcfc67e9205886eb20534d5c894900c2e60dfa04d5b6edaa21d136a8cc8b42"} Feb 16 17:03:34 crc kubenswrapper[4870]: I0216 17:03:34.781217 4870 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdx44" event={"ID":"11e0dfd3-85a3-45b6-889d-31159c5a23cb","Type":"ContainerStarted","Data":"1e5ca1a8d9c93226039adbaa9c16cc11a34f4c36056354dd1f0b7ddcd7616c1e"} Feb 16 17:03:34 crc kubenswrapper[4870]: I0216 17:03:34.783253 4870 generic.go:334] "Generic (PLEG): container finished" podID="b75fc0db-cff7-4c59-8019-e98bc08b1a0c" containerID="ebf6295ee35a5f1d4e8245d8e4ba6afd92a33ed63d06acd4638b2ae189816322" exitCode=0 Feb 16 17:03:34 crc kubenswrapper[4870]: I0216 17:03:34.783399 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jq6pt" event={"ID":"b75fc0db-cff7-4c59-8019-e98bc08b1a0c","Type":"ContainerDied","Data":"ebf6295ee35a5f1d4e8245d8e4ba6afd92a33ed63d06acd4638b2ae189816322"} Feb 16 17:03:34 crc kubenswrapper[4870]: I0216 17:03:34.797751 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w5rt4" podStartSLOduration=3.704660899 podStartE2EDuration="1m7.797730184s" podCreationTimestamp="2026-02-16 17:02:27 +0000 UTC" firstStartedPulling="2026-02-16 17:02:30.007355587 +0000 UTC m=+154.490819961" lastFinishedPulling="2026-02-16 17:03:34.100424852 +0000 UTC m=+218.583889246" observedRunningTime="2026-02-16 17:03:34.797464326 +0000 UTC m=+219.280928710" watchObservedRunningTime="2026-02-16 17:03:34.797730184 +0000 UTC m=+219.281194568" Feb 16 17:03:34 crc kubenswrapper[4870]: I0216 17:03:34.820493 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tmfnv" podStartSLOduration=3.487060465 podStartE2EDuration="1m7.820472615s" podCreationTimestamp="2026-02-16 17:02:27 +0000 UTC" firstStartedPulling="2026-02-16 17:02:29.954436891 +0000 UTC m=+154.437901275" lastFinishedPulling="2026-02-16 17:03:34.287849041 +0000 UTC m=+218.771313425" observedRunningTime="2026-02-16 17:03:34.816562656 +0000 UTC 
m=+219.300027030" watchObservedRunningTime="2026-02-16 17:03:34.820472615 +0000 UTC m=+219.303936999" Feb 16 17:03:34 crc kubenswrapper[4870]: I0216 17:03:34.850764 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-68qjg" podStartSLOduration=3.6917885889999997 podStartE2EDuration="1m7.850744826s" podCreationTimestamp="2026-02-16 17:02:27 +0000 UTC" firstStartedPulling="2026-02-16 17:02:30.047569591 +0000 UTC m=+154.531033975" lastFinishedPulling="2026-02-16 17:03:34.206525828 +0000 UTC m=+218.689990212" observedRunningTime="2026-02-16 17:03:34.847136016 +0000 UTC m=+219.330600400" watchObservedRunningTime="2026-02-16 17:03:34.850744826 +0000 UTC m=+219.334209210" Feb 16 17:03:34 crc kubenswrapper[4870]: I0216 17:03:34.868068 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qrlg6" podStartSLOduration=2.858949226 podStartE2EDuration="1m5.868041932s" podCreationTimestamp="2026-02-16 17:02:29 +0000 UTC" firstStartedPulling="2026-02-16 17:02:31.150922828 +0000 UTC m=+155.634387212" lastFinishedPulling="2026-02-16 17:03:34.160015544 +0000 UTC m=+218.643479918" observedRunningTime="2026-02-16 17:03:34.86567279 +0000 UTC m=+219.349137174" watchObservedRunningTime="2026-02-16 17:03:34.868041932 +0000 UTC m=+219.351506316" Feb 16 17:03:34 crc kubenswrapper[4870]: I0216 17:03:34.886898 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cdx44" podStartSLOduration=2.785384911 podStartE2EDuration="1m5.886878254s" podCreationTimestamp="2026-02-16 17:02:29 +0000 UTC" firstStartedPulling="2026-02-16 17:02:31.098355852 +0000 UTC m=+155.581820236" lastFinishedPulling="2026-02-16 17:03:34.199849205 +0000 UTC m=+218.683313579" observedRunningTime="2026-02-16 17:03:34.886223134 +0000 UTC m=+219.369687538" watchObservedRunningTime="2026-02-16 17:03:34.886878254 +0000 UTC 
m=+219.370342638" Feb 16 17:03:35 crc kubenswrapper[4870]: I0216 17:03:35.366971 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:03:35 crc kubenswrapper[4870]: I0216 17:03:35.367040 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:03:35 crc kubenswrapper[4870]: I0216 17:03:35.367088 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:03:35 crc kubenswrapper[4870]: I0216 17:03:35.367670 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a9cf7157bbb01397ac7936495386a27de8cfbb887938ec788adf37fc58b093b"} pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:03:35 crc kubenswrapper[4870]: I0216 17:03:35.367726 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" containerID="cri-o://5a9cf7157bbb01397ac7936495386a27de8cfbb887938ec788adf37fc58b093b" gracePeriod=600 Feb 16 17:03:35 crc kubenswrapper[4870]: I0216 17:03:35.789928 4870 generic.go:334] "Generic (PLEG): container finished" podID="a3e693e8-f31b-4cc5-b521-0f37451019ab" 
containerID="5a9cf7157bbb01397ac7936495386a27de8cfbb887938ec788adf37fc58b093b" exitCode=0 Feb 16 17:03:35 crc kubenswrapper[4870]: I0216 17:03:35.789985 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerDied","Data":"5a9cf7157bbb01397ac7936495386a27de8cfbb887938ec788adf37fc58b093b"} Feb 16 17:03:35 crc kubenswrapper[4870]: I0216 17:03:35.790044 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerStarted","Data":"563d08ece6d8d03837c0e89113bb97e1e95888579fc4a7e6ea7811bf1591b1d0"} Feb 16 17:03:35 crc kubenswrapper[4870]: I0216 17:03:35.791797 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jq6pt" event={"ID":"b75fc0db-cff7-4c59-8019-e98bc08b1a0c","Type":"ContainerStarted","Data":"bd6312d267d63386bfd46c2d68ce6b9b9cea34cd2f4bf45fdb9847c8eaf16fee"} Feb 16 17:03:37 crc kubenswrapper[4870]: I0216 17:03:37.667754 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w5rt4" Feb 16 17:03:37 crc kubenswrapper[4870]: I0216 17:03:37.668324 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w5rt4" Feb 16 17:03:37 crc kubenswrapper[4870]: I0216 17:03:37.971521 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wdq4b" Feb 16 17:03:37 crc kubenswrapper[4870]: I0216 17:03:37.971588 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wdq4b" Feb 16 17:03:38 crc kubenswrapper[4870]: I0216 17:03:38.040060 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-w5rt4" Feb 16 17:03:38 crc kubenswrapper[4870]: I0216 17:03:38.041387 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wdq4b" Feb 16 17:03:38 crc kubenswrapper[4870]: I0216 17:03:38.072445 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jq6pt" podStartSLOduration=6.136548987 podStartE2EDuration="1m7.072423441s" podCreationTimestamp="2026-02-16 17:02:31 +0000 UTC" firstStartedPulling="2026-02-16 17:02:34.254975003 +0000 UTC m=+158.738439387" lastFinishedPulling="2026-02-16 17:03:35.190849457 +0000 UTC m=+219.674313841" observedRunningTime="2026-02-16 17:03:35.824813623 +0000 UTC m=+220.308278017" watchObservedRunningTime="2026-02-16 17:03:38.072423441 +0000 UTC m=+222.555887835" Feb 16 17:03:38 crc kubenswrapper[4870]: I0216 17:03:38.189015 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tmfnv" Feb 16 17:03:38 crc kubenswrapper[4870]: I0216 17:03:38.189077 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tmfnv" Feb 16 17:03:38 crc kubenswrapper[4870]: I0216 17:03:38.233360 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tmfnv" Feb 16 17:03:38 crc kubenswrapper[4870]: I0216 17:03:38.386203 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-68qjg" Feb 16 17:03:38 crc kubenswrapper[4870]: I0216 17:03:38.386258 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-68qjg" Feb 16 17:03:38 crc kubenswrapper[4870]: I0216 17:03:38.422714 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-68qjg" Feb 16 17:03:38 crc kubenswrapper[4870]: I0216 17:03:38.850406 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wdq4b" Feb 16 17:03:39 crc kubenswrapper[4870]: I0216 17:03:39.603680 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qrlg6" Feb 16 17:03:39 crc kubenswrapper[4870]: I0216 17:03:39.603741 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qrlg6" Feb 16 17:03:39 crc kubenswrapper[4870]: I0216 17:03:39.677851 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qrlg6" Feb 16 17:03:39 crc kubenswrapper[4870]: I0216 17:03:39.849650 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qrlg6" Feb 16 17:03:40 crc kubenswrapper[4870]: I0216 17:03:40.317539 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cdx44" Feb 16 17:03:40 crc kubenswrapper[4870]: I0216 17:03:40.317616 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cdx44" Feb 16 17:03:40 crc kubenswrapper[4870]: I0216 17:03:40.369796 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cdx44" Feb 16 17:03:40 crc kubenswrapper[4870]: I0216 17:03:40.857560 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cdx44" Feb 16 17:03:41 crc kubenswrapper[4870]: I0216 17:03:41.084052 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nr8mf" Feb 16 17:03:41 crc kubenswrapper[4870]: I0216 
17:03:41.084107 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nr8mf" Feb 16 17:03:41 crc kubenswrapper[4870]: I0216 17:03:41.130736 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nr8mf" Feb 16 17:03:41 crc kubenswrapper[4870]: I0216 17:03:41.475807 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jq6pt" Feb 16 17:03:41 crc kubenswrapper[4870]: I0216 17:03:41.475858 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jq6pt" Feb 16 17:03:41 crc kubenswrapper[4870]: I0216 17:03:41.539902 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jq6pt" Feb 16 17:03:41 crc kubenswrapper[4870]: I0216 17:03:41.860662 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nr8mf" Feb 16 17:03:41 crc kubenswrapper[4870]: I0216 17:03:41.860865 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jq6pt" Feb 16 17:03:44 crc kubenswrapper[4870]: I0216 17:03:44.057793 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdx44"] Feb 16 17:03:44 crc kubenswrapper[4870]: I0216 17:03:44.058301 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cdx44" podUID="11e0dfd3-85a3-45b6-889d-31159c5a23cb" containerName="registry-server" containerID="cri-o://1e5ca1a8d9c93226039adbaa9c16cc11a34f4c36056354dd1f0b7ddcd7616c1e" gracePeriod=2 Feb 16 17:03:44 crc kubenswrapper[4870]: I0216 17:03:44.887848 4870 generic.go:334] "Generic (PLEG): container finished" podID="11e0dfd3-85a3-45b6-889d-31159c5a23cb" 
containerID="1e5ca1a8d9c93226039adbaa9c16cc11a34f4c36056354dd1f0b7ddcd7616c1e" exitCode=0 Feb 16 17:03:44 crc kubenswrapper[4870]: I0216 17:03:44.887902 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdx44" event={"ID":"11e0dfd3-85a3-45b6-889d-31159c5a23cb","Type":"ContainerDied","Data":"1e5ca1a8d9c93226039adbaa9c16cc11a34f4c36056354dd1f0b7ddcd7616c1e"} Feb 16 17:03:44 crc kubenswrapper[4870]: I0216 17:03:44.888346 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdx44" event={"ID":"11e0dfd3-85a3-45b6-889d-31159c5a23cb","Type":"ContainerDied","Data":"95335fc09dfdce2b51d0d5dca69e64765be5a72c92668312f33e8944d05d7ce5"} Feb 16 17:03:44 crc kubenswrapper[4870]: I0216 17:03:44.888365 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95335fc09dfdce2b51d0d5dca69e64765be5a72c92668312f33e8944d05d7ce5" Feb 16 17:03:44 crc kubenswrapper[4870]: I0216 17:03:44.910989 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdx44" Feb 16 17:03:45 crc kubenswrapper[4870]: I0216 17:03:45.096841 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11e0dfd3-85a3-45b6-889d-31159c5a23cb-utilities\") pod \"11e0dfd3-85a3-45b6-889d-31159c5a23cb\" (UID: \"11e0dfd3-85a3-45b6-889d-31159c5a23cb\") " Feb 16 17:03:45 crc kubenswrapper[4870]: I0216 17:03:45.096923 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plgt6\" (UniqueName: \"kubernetes.io/projected/11e0dfd3-85a3-45b6-889d-31159c5a23cb-kube-api-access-plgt6\") pod \"11e0dfd3-85a3-45b6-889d-31159c5a23cb\" (UID: \"11e0dfd3-85a3-45b6-889d-31159c5a23cb\") " Feb 16 17:03:45 crc kubenswrapper[4870]: I0216 17:03:45.097000 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11e0dfd3-85a3-45b6-889d-31159c5a23cb-catalog-content\") pod \"11e0dfd3-85a3-45b6-889d-31159c5a23cb\" (UID: \"11e0dfd3-85a3-45b6-889d-31159c5a23cb\") " Feb 16 17:03:45 crc kubenswrapper[4870]: I0216 17:03:45.097792 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e0dfd3-85a3-45b6-889d-31159c5a23cb-utilities" (OuterVolumeSpecName: "utilities") pod "11e0dfd3-85a3-45b6-889d-31159c5a23cb" (UID: "11e0dfd3-85a3-45b6-889d-31159c5a23cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:03:45 crc kubenswrapper[4870]: I0216 17:03:45.106238 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e0dfd3-85a3-45b6-889d-31159c5a23cb-kube-api-access-plgt6" (OuterVolumeSpecName: "kube-api-access-plgt6") pod "11e0dfd3-85a3-45b6-889d-31159c5a23cb" (UID: "11e0dfd3-85a3-45b6-889d-31159c5a23cb"). InnerVolumeSpecName "kube-api-access-plgt6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:03:45 crc kubenswrapper[4870]: I0216 17:03:45.124602 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e0dfd3-85a3-45b6-889d-31159c5a23cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11e0dfd3-85a3-45b6-889d-31159c5a23cb" (UID: "11e0dfd3-85a3-45b6-889d-31159c5a23cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:03:45 crc kubenswrapper[4870]: I0216 17:03:45.198358 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11e0dfd3-85a3-45b6-889d-31159c5a23cb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:45 crc kubenswrapper[4870]: I0216 17:03:45.198411 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11e0dfd3-85a3-45b6-889d-31159c5a23cb-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:45 crc kubenswrapper[4870]: I0216 17:03:45.198424 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plgt6\" (UniqueName: \"kubernetes.io/projected/11e0dfd3-85a3-45b6-889d-31159c5a23cb-kube-api-access-plgt6\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:45 crc kubenswrapper[4870]: I0216 17:03:45.893351 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdx44" Feb 16 17:03:45 crc kubenswrapper[4870]: I0216 17:03:45.919644 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdx44"] Feb 16 17:03:45 crc kubenswrapper[4870]: I0216 17:03:45.922512 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdx44"] Feb 16 17:03:46 crc kubenswrapper[4870]: I0216 17:03:46.232904 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11e0dfd3-85a3-45b6-889d-31159c5a23cb" path="/var/lib/kubelet/pods/11e0dfd3-85a3-45b6-889d-31159c5a23cb/volumes" Feb 16 17:03:46 crc kubenswrapper[4870]: I0216 17:03:46.457510 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jq6pt"] Feb 16 17:03:46 crc kubenswrapper[4870]: I0216 17:03:46.457781 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jq6pt" podUID="b75fc0db-cff7-4c59-8019-e98bc08b1a0c" containerName="registry-server" containerID="cri-o://bd6312d267d63386bfd46c2d68ce6b9b9cea34cd2f4bf45fdb9847c8eaf16fee" gracePeriod=2 Feb 16 17:03:47 crc kubenswrapper[4870]: I0216 17:03:47.708293 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w5rt4" Feb 16 17:03:47 crc kubenswrapper[4870]: I0216 17:03:47.906714 4870 generic.go:334] "Generic (PLEG): container finished" podID="b75fc0db-cff7-4c59-8019-e98bc08b1a0c" containerID="bd6312d267d63386bfd46c2d68ce6b9b9cea34cd2f4bf45fdb9847c8eaf16fee" exitCode=0 Feb 16 17:03:47 crc kubenswrapper[4870]: I0216 17:03:47.906763 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jq6pt" event={"ID":"b75fc0db-cff7-4c59-8019-e98bc08b1a0c","Type":"ContainerDied","Data":"bd6312d267d63386bfd46c2d68ce6b9b9cea34cd2f4bf45fdb9847c8eaf16fee"} Feb 16 17:03:48 crc 
kubenswrapper[4870]: I0216 17:03:48.012225 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jq6pt" Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.135375 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-catalog-content\") pod \"b75fc0db-cff7-4c59-8019-e98bc08b1a0c\" (UID: \"b75fc0db-cff7-4c59-8019-e98bc08b1a0c\") " Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.135468 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bcrx\" (UniqueName: \"kubernetes.io/projected/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-kube-api-access-8bcrx\") pod \"b75fc0db-cff7-4c59-8019-e98bc08b1a0c\" (UID: \"b75fc0db-cff7-4c59-8019-e98bc08b1a0c\") " Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.135555 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-utilities\") pod \"b75fc0db-cff7-4c59-8019-e98bc08b1a0c\" (UID: \"b75fc0db-cff7-4c59-8019-e98bc08b1a0c\") " Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.136395 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-utilities" (OuterVolumeSpecName: "utilities") pod "b75fc0db-cff7-4c59-8019-e98bc08b1a0c" (UID: "b75fc0db-cff7-4c59-8019-e98bc08b1a0c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.142892 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-kube-api-access-8bcrx" (OuterVolumeSpecName: "kube-api-access-8bcrx") pod "b75fc0db-cff7-4c59-8019-e98bc08b1a0c" (UID: "b75fc0db-cff7-4c59-8019-e98bc08b1a0c"). InnerVolumeSpecName "kube-api-access-8bcrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.237991 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bcrx\" (UniqueName: \"kubernetes.io/projected/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-kube-api-access-8bcrx\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.238069 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.249300 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tmfnv" Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.414630 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b75fc0db-cff7-4c59-8019-e98bc08b1a0c" (UID: "b75fc0db-cff7-4c59-8019-e98bc08b1a0c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.427329 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-68qjg" Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.440569 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b75fc0db-cff7-4c59-8019-e98bc08b1a0c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.914544 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jq6pt" event={"ID":"b75fc0db-cff7-4c59-8019-e98bc08b1a0c","Type":"ContainerDied","Data":"5b738b294c7bb2c5d26f0abe8d75dd591bcaf38bcc12389126782cd3ea391316"} Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.914601 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jq6pt" Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.914611 4870 scope.go:117] "RemoveContainer" containerID="bd6312d267d63386bfd46c2d68ce6b9b9cea34cd2f4bf45fdb9847c8eaf16fee" Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.936612 4870 scope.go:117] "RemoveContainer" containerID="ebf6295ee35a5f1d4e8245d8e4ba6afd92a33ed63d06acd4638b2ae189816322" Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.946533 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jq6pt"] Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.951420 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jq6pt"] Feb 16 17:03:48 crc kubenswrapper[4870]: I0216 17:03:48.961935 4870 scope.go:117] "RemoveContainer" containerID="075354193be51a5e49558fa708844ae0e43e5c0f1368bd7f96651ec023971fdb" Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.051620 4870 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-68qjg"] Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.051846 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-68qjg" podUID="cfc3b15d-ea9f-4842-8d24-0af28f83153d" containerName="registry-server" containerID="cri-o://e67eb242b9a4e16cf28e9e08913657bb9191af8d4075bb5c9384bed824b5c5a4" gracePeriod=2 Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.424580 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-68qjg" Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.552810 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfc3b15d-ea9f-4842-8d24-0af28f83153d-catalog-content\") pod \"cfc3b15d-ea9f-4842-8d24-0af28f83153d\" (UID: \"cfc3b15d-ea9f-4842-8d24-0af28f83153d\") " Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.552879 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfc3b15d-ea9f-4842-8d24-0af28f83153d-utilities\") pod \"cfc3b15d-ea9f-4842-8d24-0af28f83153d\" (UID: \"cfc3b15d-ea9f-4842-8d24-0af28f83153d\") " Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.553002 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2ghk\" (UniqueName: \"kubernetes.io/projected/cfc3b15d-ea9f-4842-8d24-0af28f83153d-kube-api-access-b2ghk\") pod \"cfc3b15d-ea9f-4842-8d24-0af28f83153d\" (UID: \"cfc3b15d-ea9f-4842-8d24-0af28f83153d\") " Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.553807 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc3b15d-ea9f-4842-8d24-0af28f83153d-utilities" (OuterVolumeSpecName: "utilities") pod 
"cfc3b15d-ea9f-4842-8d24-0af28f83153d" (UID: "cfc3b15d-ea9f-4842-8d24-0af28f83153d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.557053 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc3b15d-ea9f-4842-8d24-0af28f83153d-kube-api-access-b2ghk" (OuterVolumeSpecName: "kube-api-access-b2ghk") pod "cfc3b15d-ea9f-4842-8d24-0af28f83153d" (UID: "cfc3b15d-ea9f-4842-8d24-0af28f83153d"). InnerVolumeSpecName "kube-api-access-b2ghk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.601774 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc3b15d-ea9f-4842-8d24-0af28f83153d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfc3b15d-ea9f-4842-8d24-0af28f83153d" (UID: "cfc3b15d-ea9f-4842-8d24-0af28f83153d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.655053 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfc3b15d-ea9f-4842-8d24-0af28f83153d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.655113 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2ghk\" (UniqueName: \"kubernetes.io/projected/cfc3b15d-ea9f-4842-8d24-0af28f83153d-kube-api-access-b2ghk\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.655125 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfc3b15d-ea9f-4842-8d24-0af28f83153d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.922843 4870 generic.go:334] "Generic (PLEG): container finished" podID="cfc3b15d-ea9f-4842-8d24-0af28f83153d" containerID="e67eb242b9a4e16cf28e9e08913657bb9191af8d4075bb5c9384bed824b5c5a4" exitCode=0 Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.922878 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68qjg" event={"ID":"cfc3b15d-ea9f-4842-8d24-0af28f83153d","Type":"ContainerDied","Data":"e67eb242b9a4e16cf28e9e08913657bb9191af8d4075bb5c9384bed824b5c5a4"} Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.922898 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68qjg" event={"ID":"cfc3b15d-ea9f-4842-8d24-0af28f83153d","Type":"ContainerDied","Data":"b2774b6b11a661f9b5110b05e857fa9ed2e2be90c8bcb346f1bf5e2c850f8c05"} Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.922916 4870 scope.go:117] "RemoveContainer" containerID="e67eb242b9a4e16cf28e9e08913657bb9191af8d4075bb5c9384bed824b5c5a4" Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 
17:03:49.922940 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-68qjg" Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.953135 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-68qjg"] Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.954282 4870 scope.go:117] "RemoveContainer" containerID="6da8da76ed26c782e5e383f5d518184bc3c1090841d638246ad498d0b280627a" Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.956256 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-68qjg"] Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.977153 4870 scope.go:117] "RemoveContainer" containerID="ff290cb6fe4e36631966f53950a38e92601decbaa8ddc314a49aee62aa835eb5" Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.996581 4870 scope.go:117] "RemoveContainer" containerID="e67eb242b9a4e16cf28e9e08913657bb9191af8d4075bb5c9384bed824b5c5a4" Feb 16 17:03:49 crc kubenswrapper[4870]: E0216 17:03:49.997078 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e67eb242b9a4e16cf28e9e08913657bb9191af8d4075bb5c9384bed824b5c5a4\": container with ID starting with e67eb242b9a4e16cf28e9e08913657bb9191af8d4075bb5c9384bed824b5c5a4 not found: ID does not exist" containerID="e67eb242b9a4e16cf28e9e08913657bb9191af8d4075bb5c9384bed824b5c5a4" Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.997107 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e67eb242b9a4e16cf28e9e08913657bb9191af8d4075bb5c9384bed824b5c5a4"} err="failed to get container status \"e67eb242b9a4e16cf28e9e08913657bb9191af8d4075bb5c9384bed824b5c5a4\": rpc error: code = NotFound desc = could not find container \"e67eb242b9a4e16cf28e9e08913657bb9191af8d4075bb5c9384bed824b5c5a4\": container with ID starting with 
e67eb242b9a4e16cf28e9e08913657bb9191af8d4075bb5c9384bed824b5c5a4 not found: ID does not exist" Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.997128 4870 scope.go:117] "RemoveContainer" containerID="6da8da76ed26c782e5e383f5d518184bc3c1090841d638246ad498d0b280627a" Feb 16 17:03:49 crc kubenswrapper[4870]: E0216 17:03:49.997433 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6da8da76ed26c782e5e383f5d518184bc3c1090841d638246ad498d0b280627a\": container with ID starting with 6da8da76ed26c782e5e383f5d518184bc3c1090841d638246ad498d0b280627a not found: ID does not exist" containerID="6da8da76ed26c782e5e383f5d518184bc3c1090841d638246ad498d0b280627a" Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.997478 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6da8da76ed26c782e5e383f5d518184bc3c1090841d638246ad498d0b280627a"} err="failed to get container status \"6da8da76ed26c782e5e383f5d518184bc3c1090841d638246ad498d0b280627a\": rpc error: code = NotFound desc = could not find container \"6da8da76ed26c782e5e383f5d518184bc3c1090841d638246ad498d0b280627a\": container with ID starting with 6da8da76ed26c782e5e383f5d518184bc3c1090841d638246ad498d0b280627a not found: ID does not exist" Feb 16 17:03:49 crc kubenswrapper[4870]: I0216 17:03:49.997511 4870 scope.go:117] "RemoveContainer" containerID="ff290cb6fe4e36631966f53950a38e92601decbaa8ddc314a49aee62aa835eb5" Feb 16 17:03:49 crc kubenswrapper[4870]: E0216 17:03:49.997981 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff290cb6fe4e36631966f53950a38e92601decbaa8ddc314a49aee62aa835eb5\": container with ID starting with ff290cb6fe4e36631966f53950a38e92601decbaa8ddc314a49aee62aa835eb5 not found: ID does not exist" containerID="ff290cb6fe4e36631966f53950a38e92601decbaa8ddc314a49aee62aa835eb5" Feb 16 17:03:49 crc 
kubenswrapper[4870]: I0216 17:03:49.998006 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff290cb6fe4e36631966f53950a38e92601decbaa8ddc314a49aee62aa835eb5"} err="failed to get container status \"ff290cb6fe4e36631966f53950a38e92601decbaa8ddc314a49aee62aa835eb5\": rpc error: code = NotFound desc = could not find container \"ff290cb6fe4e36631966f53950a38e92601decbaa8ddc314a49aee62aa835eb5\": container with ID starting with ff290cb6fe4e36631966f53950a38e92601decbaa8ddc314a49aee62aa835eb5 not found: ID does not exist" Feb 16 17:03:50 crc kubenswrapper[4870]: I0216 17:03:50.230072 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b75fc0db-cff7-4c59-8019-e98bc08b1a0c" path="/var/lib/kubelet/pods/b75fc0db-cff7-4c59-8019-e98bc08b1a0c/volumes" Feb 16 17:03:50 crc kubenswrapper[4870]: I0216 17:03:50.230897 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfc3b15d-ea9f-4842-8d24-0af28f83153d" path="/var/lib/kubelet/pods/cfc3b15d-ea9f-4842-8d24-0af28f83153d/volumes" Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.453372 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tmfnv"] Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.453909 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tmfnv" podUID="850617f1-446f-44e3-9a83-215215f95cbd" containerName="registry-server" containerID="cri-o://37bcfc67e9205886eb20534d5c894900c2e60dfa04d5b6edaa21d136a8cc8b42" gracePeriod=2 Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.826650 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tmfnv" Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.934829 4870 generic.go:334] "Generic (PLEG): container finished" podID="850617f1-446f-44e3-9a83-215215f95cbd" containerID="37bcfc67e9205886eb20534d5c894900c2e60dfa04d5b6edaa21d136a8cc8b42" exitCode=0 Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.934891 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tmfnv" Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.934910 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmfnv" event={"ID":"850617f1-446f-44e3-9a83-215215f95cbd","Type":"ContainerDied","Data":"37bcfc67e9205886eb20534d5c894900c2e60dfa04d5b6edaa21d136a8cc8b42"} Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.935025 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmfnv" event={"ID":"850617f1-446f-44e3-9a83-215215f95cbd","Type":"ContainerDied","Data":"006a28e07e4eeaf1bd147e1d2520fcac5467356c5bdb70e2aecf991f9d82f18d"} Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.935049 4870 scope.go:117] "RemoveContainer" containerID="37bcfc67e9205886eb20534d5c894900c2e60dfa04d5b6edaa21d136a8cc8b42" Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.949799 4870 scope.go:117] "RemoveContainer" containerID="2b44010d363ef4106f93687a9a8c52e639b7d84ad7d084696efbb0a60fefbfa2" Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.964704 4870 scope.go:117] "RemoveContainer" containerID="3eaca3f7360d9841ada91c098b276c30d3c29d002b30dd11903c8cf584353594" Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.984563 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/850617f1-446f-44e3-9a83-215215f95cbd-utilities\") pod 
\"850617f1-446f-44e3-9a83-215215f95cbd\" (UID: \"850617f1-446f-44e3-9a83-215215f95cbd\") " Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.984640 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gs4v\" (UniqueName: \"kubernetes.io/projected/850617f1-446f-44e3-9a83-215215f95cbd-kube-api-access-6gs4v\") pod \"850617f1-446f-44e3-9a83-215215f95cbd\" (UID: \"850617f1-446f-44e3-9a83-215215f95cbd\") " Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.984711 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/850617f1-446f-44e3-9a83-215215f95cbd-catalog-content\") pod \"850617f1-446f-44e3-9a83-215215f95cbd\" (UID: \"850617f1-446f-44e3-9a83-215215f95cbd\") " Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.985817 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/850617f1-446f-44e3-9a83-215215f95cbd-utilities" (OuterVolumeSpecName: "utilities") pod "850617f1-446f-44e3-9a83-215215f95cbd" (UID: "850617f1-446f-44e3-9a83-215215f95cbd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.990238 4870 scope.go:117] "RemoveContainer" containerID="37bcfc67e9205886eb20534d5c894900c2e60dfa04d5b6edaa21d136a8cc8b42" Feb 16 17:03:51 crc kubenswrapper[4870]: E0216 17:03:51.990825 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37bcfc67e9205886eb20534d5c894900c2e60dfa04d5b6edaa21d136a8cc8b42\": container with ID starting with 37bcfc67e9205886eb20534d5c894900c2e60dfa04d5b6edaa21d136a8cc8b42 not found: ID does not exist" containerID="37bcfc67e9205886eb20534d5c894900c2e60dfa04d5b6edaa21d136a8cc8b42" Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.990878 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37bcfc67e9205886eb20534d5c894900c2e60dfa04d5b6edaa21d136a8cc8b42"} err="failed to get container status \"37bcfc67e9205886eb20534d5c894900c2e60dfa04d5b6edaa21d136a8cc8b42\": rpc error: code = NotFound desc = could not find container \"37bcfc67e9205886eb20534d5c894900c2e60dfa04d5b6edaa21d136a8cc8b42\": container with ID starting with 37bcfc67e9205886eb20534d5c894900c2e60dfa04d5b6edaa21d136a8cc8b42 not found: ID does not exist" Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.990910 4870 scope.go:117] "RemoveContainer" containerID="2b44010d363ef4106f93687a9a8c52e639b7d84ad7d084696efbb0a60fefbfa2" Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.990991 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/850617f1-446f-44e3-9a83-215215f95cbd-kube-api-access-6gs4v" (OuterVolumeSpecName: "kube-api-access-6gs4v") pod "850617f1-446f-44e3-9a83-215215f95cbd" (UID: "850617f1-446f-44e3-9a83-215215f95cbd"). InnerVolumeSpecName "kube-api-access-6gs4v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:03:51 crc kubenswrapper[4870]: E0216 17:03:51.991503 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b44010d363ef4106f93687a9a8c52e639b7d84ad7d084696efbb0a60fefbfa2\": container with ID starting with 2b44010d363ef4106f93687a9a8c52e639b7d84ad7d084696efbb0a60fefbfa2 not found: ID does not exist" containerID="2b44010d363ef4106f93687a9a8c52e639b7d84ad7d084696efbb0a60fefbfa2" Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.991556 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b44010d363ef4106f93687a9a8c52e639b7d84ad7d084696efbb0a60fefbfa2"} err="failed to get container status \"2b44010d363ef4106f93687a9a8c52e639b7d84ad7d084696efbb0a60fefbfa2\": rpc error: code = NotFound desc = could not find container \"2b44010d363ef4106f93687a9a8c52e639b7d84ad7d084696efbb0a60fefbfa2\": container with ID starting with 2b44010d363ef4106f93687a9a8c52e639b7d84ad7d084696efbb0a60fefbfa2 not found: ID does not exist" Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.991593 4870 scope.go:117] "RemoveContainer" containerID="3eaca3f7360d9841ada91c098b276c30d3c29d002b30dd11903c8cf584353594" Feb 16 17:03:51 crc kubenswrapper[4870]: E0216 17:03:51.991979 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eaca3f7360d9841ada91c098b276c30d3c29d002b30dd11903c8cf584353594\": container with ID starting with 3eaca3f7360d9841ada91c098b276c30d3c29d002b30dd11903c8cf584353594 not found: ID does not exist" containerID="3eaca3f7360d9841ada91c098b276c30d3c29d002b30dd11903c8cf584353594" Feb 16 17:03:51 crc kubenswrapper[4870]: I0216 17:03:51.992003 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eaca3f7360d9841ada91c098b276c30d3c29d002b30dd11903c8cf584353594"} 
err="failed to get container status \"3eaca3f7360d9841ada91c098b276c30d3c29d002b30dd11903c8cf584353594\": rpc error: code = NotFound desc = could not find container \"3eaca3f7360d9841ada91c098b276c30d3c29d002b30dd11903c8cf584353594\": container with ID starting with 3eaca3f7360d9841ada91c098b276c30d3c29d002b30dd11903c8cf584353594 not found: ID does not exist" Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.036808 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/850617f1-446f-44e3-9a83-215215f95cbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "850617f1-446f-44e3-9a83-215215f95cbd" (UID: "850617f1-446f-44e3-9a83-215215f95cbd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.086131 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/850617f1-446f-44e3-9a83-215215f95cbd-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.086190 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gs4v\" (UniqueName: \"kubernetes.io/projected/850617f1-446f-44e3-9a83-215215f95cbd-kube-api-access-6gs4v\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.086212 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/850617f1-446f-44e3-9a83-215215f95cbd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.264744 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tmfnv"] Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.274061 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tmfnv"] Feb 16 17:03:52 crc 
kubenswrapper[4870]: I0216 17:03:52.281421 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" podUID="db804a3b-9f2e-4638-ae79-7ef21a87104d" containerName="oauth-openshift" containerID="cri-o://3fdde906c153ce907b2a7d137a93f696ec8cd685a2c05d263d3de9f5fc6a0ded" gracePeriod=15 Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.696263 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.795653 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-trusted-ca-bundle\") pod \"db804a3b-9f2e-4638-ae79-7ef21a87104d\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.795707 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-ocp-branding-template\") pod \"db804a3b-9f2e-4638-ae79-7ef21a87104d\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.795739 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8svf4\" (UniqueName: \"kubernetes.io/projected/db804a3b-9f2e-4638-ae79-7ef21a87104d-kube-api-access-8svf4\") pod \"db804a3b-9f2e-4638-ae79-7ef21a87104d\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") " Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.796635 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: 
"v4-0-config-system-trusted-ca-bundle") pod "db804a3b-9f2e-4638-ae79-7ef21a87104d" (UID: "db804a3b-9f2e-4638-ae79-7ef21a87104d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.796843 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.800075 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db804a3b-9f2e-4638-ae79-7ef21a87104d-kube-api-access-8svf4" (OuterVolumeSpecName: "kube-api-access-8svf4") pod "db804a3b-9f2e-4638-ae79-7ef21a87104d" (UID: "db804a3b-9f2e-4638-ae79-7ef21a87104d"). InnerVolumeSpecName "kube-api-access-8svf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.800103 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "db804a3b-9f2e-4638-ae79-7ef21a87104d" (UID: "db804a3b-9f2e-4638-ae79-7ef21a87104d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.897026 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-audit-policies\") pod \"db804a3b-9f2e-4638-ae79-7ef21a87104d\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") "
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.897095 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-serving-cert\") pod \"db804a3b-9f2e-4638-ae79-7ef21a87104d\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") "
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.897131 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-cliconfig\") pod \"db804a3b-9f2e-4638-ae79-7ef21a87104d\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") "
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.897654 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-error\") pod \"db804a3b-9f2e-4638-ae79-7ef21a87104d\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") "
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.897682 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-router-certs\") pod \"db804a3b-9f2e-4638-ae79-7ef21a87104d\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") "
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.897708 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-session\") pod \"db804a3b-9f2e-4638-ae79-7ef21a87104d\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") "
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.897738 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-service-ca\") pod \"db804a3b-9f2e-4638-ae79-7ef21a87104d\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") "
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.897752 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "db804a3b-9f2e-4638-ae79-7ef21a87104d" (UID: "db804a3b-9f2e-4638-ae79-7ef21a87104d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.897768 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-idp-0-file-data\") pod \"db804a3b-9f2e-4638-ae79-7ef21a87104d\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") "
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.897829 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-login\") pod \"db804a3b-9f2e-4638-ae79-7ef21a87104d\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") "
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.897859 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/db804a3b-9f2e-4638-ae79-7ef21a87104d-audit-dir\") pod \"db804a3b-9f2e-4638-ae79-7ef21a87104d\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") "
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.897855 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "db804a3b-9f2e-4638-ae79-7ef21a87104d" (UID: "db804a3b-9f2e-4638-ae79-7ef21a87104d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.897893 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-provider-selection\") pod \"db804a3b-9f2e-4638-ae79-7ef21a87104d\" (UID: \"db804a3b-9f2e-4638-ae79-7ef21a87104d\") "
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.898172 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.898200 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8svf4\" (UniqueName: \"kubernetes.io/projected/db804a3b-9f2e-4638-ae79-7ef21a87104d-kube-api-access-8svf4\") on node \"crc\" DevicePath \"\""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.898217 4870 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.898235 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.898284 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db804a3b-9f2e-4638-ae79-7ef21a87104d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "db804a3b-9f2e-4638-ae79-7ef21a87104d" (UID: "db804a3b-9f2e-4638-ae79-7ef21a87104d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.898787 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "db804a3b-9f2e-4638-ae79-7ef21a87104d" (UID: "db804a3b-9f2e-4638-ae79-7ef21a87104d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.900472 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "db804a3b-9f2e-4638-ae79-7ef21a87104d" (UID: "db804a3b-9f2e-4638-ae79-7ef21a87104d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.901052 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "db804a3b-9f2e-4638-ae79-7ef21a87104d" (UID: "db804a3b-9f2e-4638-ae79-7ef21a87104d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.901189 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "db804a3b-9f2e-4638-ae79-7ef21a87104d" (UID: "db804a3b-9f2e-4638-ae79-7ef21a87104d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.901855 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "db804a3b-9f2e-4638-ae79-7ef21a87104d" (UID: "db804a3b-9f2e-4638-ae79-7ef21a87104d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.902085 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "db804a3b-9f2e-4638-ae79-7ef21a87104d" (UID: "db804a3b-9f2e-4638-ae79-7ef21a87104d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.902447 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "db804a3b-9f2e-4638-ae79-7ef21a87104d" (UID: "db804a3b-9f2e-4638-ae79-7ef21a87104d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.903131 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "db804a3b-9f2e-4638-ae79-7ef21a87104d" (UID: "db804a3b-9f2e-4638-ae79-7ef21a87104d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.942396 4870 generic.go:334] "Generic (PLEG): container finished" podID="db804a3b-9f2e-4638-ae79-7ef21a87104d" containerID="3fdde906c153ce907b2a7d137a93f696ec8cd685a2c05d263d3de9f5fc6a0ded" exitCode=0
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.942440 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r"
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.942445 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" event={"ID":"db804a3b-9f2e-4638-ae79-7ef21a87104d","Type":"ContainerDied","Data":"3fdde906c153ce907b2a7d137a93f696ec8cd685a2c05d263d3de9f5fc6a0ded"}
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.942501 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5ll8r" event={"ID":"db804a3b-9f2e-4638-ae79-7ef21a87104d","Type":"ContainerDied","Data":"16805e682513b26781d5e0e95495257e705a91f30cb8c5543b43588924cc3255"}
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.942521 4870 scope.go:117] "RemoveContainer" containerID="3fdde906c153ce907b2a7d137a93f696ec8cd685a2c05d263d3de9f5fc6a0ded"
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.966584 4870 scope.go:117] "RemoveContainer" containerID="3fdde906c153ce907b2a7d137a93f696ec8cd685a2c05d263d3de9f5fc6a0ded"
Feb 16 17:03:52 crc kubenswrapper[4870]: E0216 17:03:52.967385 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fdde906c153ce907b2a7d137a93f696ec8cd685a2c05d263d3de9f5fc6a0ded\": container with ID starting with 3fdde906c153ce907b2a7d137a93f696ec8cd685a2c05d263d3de9f5fc6a0ded not found: ID does not exist" containerID="3fdde906c153ce907b2a7d137a93f696ec8cd685a2c05d263d3de9f5fc6a0ded"
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.967477 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fdde906c153ce907b2a7d137a93f696ec8cd685a2c05d263d3de9f5fc6a0ded"} err="failed to get container status \"3fdde906c153ce907b2a7d137a93f696ec8cd685a2c05d263d3de9f5fc6a0ded\": rpc error: code = NotFound desc = could not find container \"3fdde906c153ce907b2a7d137a93f696ec8cd685a2c05d263d3de9f5fc6a0ded\": container with ID starting with 3fdde906c153ce907b2a7d137a93f696ec8cd685a2c05d263d3de9f5fc6a0ded not found: ID does not exist"
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.987852 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5ll8r"]
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.990668 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5ll8r"]
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.998913 4870 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/db804a3b-9f2e-4638-ae79-7ef21a87104d-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.999060 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.999134 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.999214 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.999277 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.999343 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.999407 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.999473 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:03:52 crc kubenswrapper[4870]: I0216 17:03:52.999533 4870 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/db804a3b-9f2e-4638-ae79-7ef21a87104d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 16 17:03:54 crc kubenswrapper[4870]: I0216 17:03:54.236196 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="850617f1-446f-44e3-9a83-215215f95cbd" path="/var/lib/kubelet/pods/850617f1-446f-44e3-9a83-215215f95cbd/volumes"
Feb 16 17:03:54 crc kubenswrapper[4870]: I0216 17:03:54.238201 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db804a3b-9f2e-4638-ae79-7ef21a87104d" path="/var/lib/kubelet/pods/db804a3b-9f2e-4638-ae79-7ef21a87104d/volumes"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.992676 4870 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.992930 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc3b15d-ea9f-4842-8d24-0af28f83153d" containerName="extract-utilities"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.992967 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc3b15d-ea9f-4842-8d24-0af28f83153d" containerName="extract-utilities"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.992982 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b75fc0db-cff7-4c59-8019-e98bc08b1a0c" containerName="extract-utilities"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.992990 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75fc0db-cff7-4c59-8019-e98bc08b1a0c" containerName="extract-utilities"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.993007 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b75fc0db-cff7-4c59-8019-e98bc08b1a0c" containerName="extract-content"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.993015 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75fc0db-cff7-4c59-8019-e98bc08b1a0c" containerName="extract-content"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.993029 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc3b15d-ea9f-4842-8d24-0af28f83153d" containerName="extract-content"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.993037 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc3b15d-ea9f-4842-8d24-0af28f83153d" containerName="extract-content"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.993049 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850617f1-446f-44e3-9a83-215215f95cbd" containerName="registry-server"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.993057 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="850617f1-446f-44e3-9a83-215215f95cbd" containerName="registry-server"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.993072 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e0dfd3-85a3-45b6-889d-31159c5a23cb" containerName="registry-server"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.993080 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e0dfd3-85a3-45b6-889d-31159c5a23cb" containerName="registry-server"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.993094 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc3b15d-ea9f-4842-8d24-0af28f83153d" containerName="registry-server"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.993103 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc3b15d-ea9f-4842-8d24-0af28f83153d" containerName="registry-server"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.993116 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e0dfd3-85a3-45b6-889d-31159c5a23cb" containerName="extract-utilities"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.993152 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e0dfd3-85a3-45b6-889d-31159c5a23cb" containerName="extract-utilities"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.993166 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b75fc0db-cff7-4c59-8019-e98bc08b1a0c" containerName="registry-server"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.993175 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75fc0db-cff7-4c59-8019-e98bc08b1a0c" containerName="registry-server"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.993188 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db804a3b-9f2e-4638-ae79-7ef21a87104d" containerName="oauth-openshift"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.993198 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="db804a3b-9f2e-4638-ae79-7ef21a87104d" containerName="oauth-openshift"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.993210 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850617f1-446f-44e3-9a83-215215f95cbd" containerName="extract-utilities"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.993219 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="850617f1-446f-44e3-9a83-215215f95cbd" containerName="extract-utilities"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.993228 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e0dfd3-85a3-45b6-889d-31159c5a23cb" containerName="extract-content"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.993238 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e0dfd3-85a3-45b6-889d-31159c5a23cb" containerName="extract-content"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.993249 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850617f1-446f-44e3-9a83-215215f95cbd" containerName="extract-content"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.993258 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="850617f1-446f-44e3-9a83-215215f95cbd" containerName="extract-content"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.993391 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="b75fc0db-cff7-4c59-8019-e98bc08b1a0c" containerName="registry-server"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.993407 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="db804a3b-9f2e-4638-ae79-7ef21a87104d" containerName="oauth-openshift"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.993423 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="850617f1-446f-44e3-9a83-215215f95cbd" containerName="registry-server"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.993433 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e0dfd3-85a3-45b6-889d-31159c5a23cb" containerName="registry-server"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.993444 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc3b15d-ea9f-4842-8d24-0af28f83153d" containerName="registry-server"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.993786 4870 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.994086 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee" gracePeriod=15
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.994124 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d" gracePeriod=15
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.994204 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2" gracePeriod=15
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.994250 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87" gracePeriod=15
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.994276 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35" gracePeriod=15
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.994361 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.996900 4870 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.997417 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.997555 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.997700 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.997819 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.997930 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.998095 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.998221 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.998333 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.998467 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.998575 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.998691 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.998810 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 16 17:03:55 crc kubenswrapper[4870]: E0216 17:03:55.999757 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 16 17:03:55 crc kubenswrapper[4870]: I0216 17:03:55.999925 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.000240 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.000365 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.000475 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.000597 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.000693 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.000852 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 17:03:56 crc kubenswrapper[4870]: E0216 17:03:56.001176 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.001336 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.001560 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.038557 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.038625 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.038735 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.038783 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.038813 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.038915 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.039021 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.039086 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.141617 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.141675 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.141719 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.141742 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.141759 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.141770 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.141814 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.141836 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.141776 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.141858 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.141889 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.141907 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.141731 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.141979 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.141984 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.142028 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.225257 4870 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:56 crc kubenswrapper[4870]: E0216 17:03:56.370913 4870 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:56 crc kubenswrapper[4870]: E0216 17:03:56.371920 4870 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:56 crc kubenswrapper[4870]: E0216 17:03:56.372290 4870 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:56 crc kubenswrapper[4870]: E0216 17:03:56.376474 4870 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:56 crc kubenswrapper[4870]: E0216 17:03:56.377072 4870 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.377115 4870 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 17:03:56 crc kubenswrapper[4870]: E0216 17:03:56.377527 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="200ms" Feb 16 17:03:56 crc kubenswrapper[4870]: E0216 17:03:56.578200 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="400ms" Feb 16 17:03:56 crc 
kubenswrapper[4870]: I0216 17:03:56.977602 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 17:03:56 crc kubenswrapper[4870]: E0216 17:03:56.978849 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="800ms" Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.979540 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.980399 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35" exitCode=0 Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.980427 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d" exitCode=0 Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.980435 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2" exitCode=0 Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.980443 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87" exitCode=2 Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.980512 4870 scope.go:117] "RemoveContainer" containerID="e3b1270bea217c5879320b6a11eab97ab7db94635c3e526a9315ca58d0b84c45" Feb 16 17:03:56 crc 
kubenswrapper[4870]: I0216 17:03:56.992318 4870 generic.go:334] "Generic (PLEG): container finished" podID="fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a" containerID="d1b7d562b23ab624d2cc820dbd4491bd117ae564bb7b50e1df70ba2cc7e0f7df" exitCode=0 Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.992375 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a","Type":"ContainerDied","Data":"d1b7d562b23ab624d2cc820dbd4491bd117ae564bb7b50e1df70ba2cc7e0f7df"} Feb 16 17:03:56 crc kubenswrapper[4870]: I0216 17:03:56.993649 4870 status_manager.go:851] "Failed to get status for pod" podUID="fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:57 crc kubenswrapper[4870]: E0216 17:03:57.753525 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:03:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:03:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:03:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:03:57Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:57 crc kubenswrapper[4870]: E0216 17:03:57.753717 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:57 crc kubenswrapper[4870]: E0216 17:03:57.753874 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:57 crc kubenswrapper[4870]: E0216 17:03:57.754079 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 
17:03:57 crc kubenswrapper[4870]: E0216 17:03:57.754254 4870 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:57 crc kubenswrapper[4870]: E0216 17:03:57.754269 4870 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:03:57 crc kubenswrapper[4870]: E0216 17:03:57.780451 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="1.6s" Feb 16 17:03:57 crc kubenswrapper[4870]: I0216 17:03:57.998790 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 17:03:58 crc kubenswrapper[4870]: E0216 17:03:58.334792 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.344205 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.344907 4870 status_manager.go:851] "Failed to get status for pod" podUID="fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.350898 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.351705 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.352306 4870 status_manager.go:851] "Failed to get status for pod" podUID="fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.352720 4870 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.469352 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-kube-api-access\") pod \"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a\" (UID: 
\"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a\") " Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.469411 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.469432 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.469485 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-kubelet-dir\") pod \"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a\" (UID: \"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a\") " Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.469529 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-var-lock\") pod \"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a\" (UID: \"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a\") " Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.469578 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.469812 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod 
"f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.469850 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a" (UID: "fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.469864 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-var-lock" (OuterVolumeSpecName: "var-lock") pod "fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a" (UID: "fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.470700 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.471017 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.475003 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a" (UID: "fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.570590 4870 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.570627 4870 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.570636 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.570647 4870 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.570655 4870 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:58 crc kubenswrapper[4870]: I0216 17:03:58.570663 4870 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.016258 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a","Type":"ContainerDied","Data":"d1cf09728a5f607507bb3efcc5e5befc2856a901f08e3240d06574f448dcacc5"} Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.016310 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1cf09728a5f607507bb3efcc5e5befc2856a901f08e3240d06574f448dcacc5" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.016364 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.022529 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.023689 4870 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee" exitCode=0 Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.023825 4870 scope.go:117] "RemoveContainer" containerID="bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.024270 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.036903 4870 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.037257 4870 status_manager.go:851] "Failed to get status for pod" podUID="fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.043385 4870 status_manager.go:851] "Failed to get status for pod" podUID="fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.043665 4870 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.048984 4870 scope.go:117] "RemoveContainer" containerID="c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.071418 4870 scope.go:117] "RemoveContainer" containerID="fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2" Feb 16 17:03:59 crc 
kubenswrapper[4870]: I0216 17:03:59.086741 4870 scope.go:117] "RemoveContainer" containerID="be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.105202 4870 scope.go:117] "RemoveContainer" containerID="a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.123127 4870 scope.go:117] "RemoveContainer" containerID="db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.141083 4870 scope.go:117] "RemoveContainer" containerID="bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35" Feb 16 17:03:59 crc kubenswrapper[4870]: E0216 17:03:59.144217 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\": container with ID starting with bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35 not found: ID does not exist" containerID="bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.144278 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35"} err="failed to get container status \"bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\": rpc error: code = NotFound desc = could not find container \"bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35\": container with ID starting with bd12487859a1df586f63037a15d4178630e9addae4804155463421c81a12fb35 not found: ID does not exist" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.144316 4870 scope.go:117] "RemoveContainer" containerID="c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d" Feb 16 17:03:59 crc kubenswrapper[4870]: E0216 17:03:59.145132 
4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\": container with ID starting with c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d not found: ID does not exist" containerID="c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.145163 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d"} err="failed to get container status \"c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\": rpc error: code = NotFound desc = could not find container \"c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d\": container with ID starting with c7e4e0b95f23bbf27dffa916ffe55adff2cb2e64f0fbfc0658375881e7bf7e6d not found: ID does not exist" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.145181 4870 scope.go:117] "RemoveContainer" containerID="fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2" Feb 16 17:03:59 crc kubenswrapper[4870]: E0216 17:03:59.146180 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\": container with ID starting with fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2 not found: ID does not exist" containerID="fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.146216 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2"} err="failed to get container status \"fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\": rpc error: code = 
NotFound desc = could not find container \"fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2\": container with ID starting with fa893c73eb12b4d13f0d97ef60f1f9be732a1ebe4d93ca1c656f1aabcd804bc2 not found: ID does not exist" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.146235 4870 scope.go:117] "RemoveContainer" containerID="be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87" Feb 16 17:03:59 crc kubenswrapper[4870]: E0216 17:03:59.146669 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\": container with ID starting with be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87 not found: ID does not exist" containerID="be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.146698 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87"} err="failed to get container status \"be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\": rpc error: code = NotFound desc = could not find container \"be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87\": container with ID starting with be79bd6550ec4d45dc71b0b5a8f57910df75c0038f04a391f261fe260075dd87 not found: ID does not exist" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.146713 4870 scope.go:117] "RemoveContainer" containerID="a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee" Feb 16 17:03:59 crc kubenswrapper[4870]: E0216 17:03:59.146986 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\": container with ID starting with 
a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee not found: ID does not exist" containerID="a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.147012 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee"} err="failed to get container status \"a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\": rpc error: code = NotFound desc = could not find container \"a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee\": container with ID starting with a6faa1bf11244e3366332edd9b111e6599fff9a8f39cd3de54976ee57bb805ee not found: ID does not exist" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.147027 4870 scope.go:117] "RemoveContainer" containerID="db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c" Feb 16 17:03:59 crc kubenswrapper[4870]: E0216 17:03:59.147266 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\": container with ID starting with db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c not found: ID does not exist" containerID="db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c" Feb 16 17:03:59 crc kubenswrapper[4870]: I0216 17:03:59.147288 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c"} err="failed to get container status \"db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\": rpc error: code = NotFound desc = could not find container \"db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c\": container with ID starting with db12109c6f4dd2a391c95e1465b431332f22829881f8f19369aef908de4e050c not found: ID does not 
exist" Feb 16 17:03:59 crc kubenswrapper[4870]: E0216 17:03:59.381218 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="3.2s" Feb 16 17:04:00 crc kubenswrapper[4870]: I0216 17:04:00.230668 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 16 17:04:01 crc kubenswrapper[4870]: E0216 17:04:01.035896 4870 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.204:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:04:01 crc kubenswrapper[4870]: I0216 17:04:01.036352 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:04:01 crc kubenswrapper[4870]: E0216 17:04:01.063583 4870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.204:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894c8e566d46e51 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 17:04:01.061711441 +0000 UTC m=+245.545175825,LastTimestamp:2026-02-16 17:04:01.061711441 +0000 UTC m=+245.545175825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 17:04:02 crc kubenswrapper[4870]: I0216 17:04:02.046029 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7acc930c272aa45fb35dabf82915cb8cbf8f2df7b43ed68e1f641679d3553668"} Feb 16 17:04:02 crc kubenswrapper[4870]: I0216 17:04:02.046470 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9dc3158307cdf5f5c0934b5c19f4ccc953db9949a1813daba23d36b460e5e7f3"} Feb 16 17:04:02 crc 
kubenswrapper[4870]: I0216 17:04:02.047775 4870 status_manager.go:851] "Failed to get status for pod" podUID="fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:04:02 crc kubenswrapper[4870]: E0216 17:04:02.047782 4870 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.204:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:04:02 crc kubenswrapper[4870]: E0216 17:04:02.583490 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="6.4s" Feb 16 17:04:06 crc kubenswrapper[4870]: I0216 17:04:06.226163 4870 status_manager.go:851] "Failed to get status for pod" podUID="fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:04:07 crc kubenswrapper[4870]: E0216 17:04:07.035087 4870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.204:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894c8e566d46e51 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 17:04:01.061711441 +0000 UTC m=+245.545175825,LastTimestamp:2026-02-16 17:04:01.061711441 +0000 UTC m=+245.545175825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 17:04:08 crc kubenswrapper[4870]: E0216 17:04:08.985277 4870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.204:6443: connect: connection refused" interval="7s" Feb 16 17:04:09 crc kubenswrapper[4870]: I0216 17:04:09.222317 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:04:09 crc kubenswrapper[4870]: I0216 17:04:09.223535 4870 status_manager.go:851] "Failed to get status for pod" podUID="fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:04:09 crc kubenswrapper[4870]: I0216 17:04:09.239443 4870 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dcb39c2a-789a-40d5-b431-9d436bcc54dd" Feb 16 17:04:09 crc kubenswrapper[4870]: I0216 17:04:09.239474 4870 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dcb39c2a-789a-40d5-b431-9d436bcc54dd" Feb 16 17:04:09 crc kubenswrapper[4870]: E0216 17:04:09.240043 4870 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:04:09 crc kubenswrapper[4870]: I0216 17:04:09.240654 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:04:09 crc kubenswrapper[4870]: W0216 17:04:09.263868 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-128f485ed6354fa9389f4b17b10804d2fe47454bf9e49a46f9e1f24c56d5ff98 WatchSource:0}: Error finding container 128f485ed6354fa9389f4b17b10804d2fe47454bf9e49a46f9e1f24c56d5ff98: Status 404 returned error can't find the container with id 128f485ed6354fa9389f4b17b10804d2fe47454bf9e49a46f9e1f24c56d5ff98 Feb 16 17:04:10 crc kubenswrapper[4870]: I0216 17:04:10.091533 4870 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="ad694ccce62f7618666c28f9b3865c7a58f2ca3e60f932860cb350cb82289702" exitCode=0 Feb 16 17:04:10 crc kubenswrapper[4870]: I0216 17:04:10.091594 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"ad694ccce62f7618666c28f9b3865c7a58f2ca3e60f932860cb350cb82289702"} Feb 16 17:04:10 crc kubenswrapper[4870]: I0216 17:04:10.091625 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"128f485ed6354fa9389f4b17b10804d2fe47454bf9e49a46f9e1f24c56d5ff98"} Feb 16 17:04:10 crc kubenswrapper[4870]: I0216 17:04:10.092036 4870 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dcb39c2a-789a-40d5-b431-9d436bcc54dd" Feb 16 17:04:10 crc kubenswrapper[4870]: I0216 17:04:10.092056 4870 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dcb39c2a-789a-40d5-b431-9d436bcc54dd" Feb 16 17:04:10 crc kubenswrapper[4870]: E0216 17:04:10.092510 4870 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:04:10 crc kubenswrapper[4870]: I0216 17:04:10.092570 4870 status_manager.go:851] "Failed to get status for pod" podUID="fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.204:6443: connect: connection refused" Feb 16 17:04:11 crc kubenswrapper[4870]: I0216 17:04:11.117484 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 17:04:11 crc kubenswrapper[4870]: I0216 17:04:11.117780 4870 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291" exitCode=1 Feb 16 17:04:11 crc kubenswrapper[4870]: I0216 17:04:11.117880 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291"} Feb 16 17:04:11 crc kubenswrapper[4870]: I0216 17:04:11.118424 4870 scope.go:117] "RemoveContainer" containerID="46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291" Feb 16 17:04:11 crc kubenswrapper[4870]: I0216 17:04:11.128064 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"563a7568f2d694d67f6ee9a3482401f12ce07deed847d6dcf7103546654dc6be"} Feb 16 17:04:11 crc 
kubenswrapper[4870]: I0216 17:04:11.128151 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0eb8e68c2c717138bfd02012c379fbdb64f8df5c9a79a399689d802026825b99"} Feb 16 17:04:11 crc kubenswrapper[4870]: I0216 17:04:11.128160 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2fabe47e55927715691a932d7d55f463e4c2e8b1edc0f9947bac5316dbc8df82"} Feb 16 17:04:11 crc kubenswrapper[4870]: I0216 17:04:11.128169 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"79eda0251bf7454efe342efbb48d04f69f24de0da6cf36a2e30497e858a4d0ec"} Feb 16 17:04:12 crc kubenswrapper[4870]: I0216 17:04:12.136288 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 17:04:12 crc kubenswrapper[4870]: I0216 17:04:12.137081 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9cb85c34b0e11af6824668a7e520c06779fb26a167a8acfd5ad5b668ebc6b99d"} Feb 16 17:04:12 crc kubenswrapper[4870]: I0216 17:04:12.140083 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8d105dc944e5efd17319f7eea68b2193f8e174b6414b0636b028a9fbcd3b8ee5"} Feb 16 17:04:12 crc kubenswrapper[4870]: I0216 17:04:12.140309 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:04:12 crc kubenswrapper[4870]: I0216 17:04:12.140394 4870 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dcb39c2a-789a-40d5-b431-9d436bcc54dd" Feb 16 17:04:12 crc kubenswrapper[4870]: I0216 17:04:12.140417 4870 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dcb39c2a-789a-40d5-b431-9d436bcc54dd" Feb 16 17:04:13 crc kubenswrapper[4870]: I0216 17:04:13.062542 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 17:04:13 crc kubenswrapper[4870]: I0216 17:04:13.062992 4870 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 17:04:13 crc kubenswrapper[4870]: I0216 17:04:13.063135 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 17:04:14 crc kubenswrapper[4870]: I0216 17:04:14.241595 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:04:14 crc kubenswrapper[4870]: I0216 17:04:14.241909 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:04:14 crc kubenswrapper[4870]: I0216 17:04:14.250582 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:04:16 crc kubenswrapper[4870]: I0216 17:04:16.059148 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 17:04:17 crc kubenswrapper[4870]: I0216 17:04:17.148135 4870 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:04:18 crc kubenswrapper[4870]: I0216 17:04:18.171617 4870 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dcb39c2a-789a-40d5-b431-9d436bcc54dd" Feb 16 17:04:18 crc kubenswrapper[4870]: I0216 17:04:18.171968 4870 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dcb39c2a-789a-40d5-b431-9d436bcc54dd" Feb 16 17:04:18 crc kubenswrapper[4870]: I0216 17:04:18.175804 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:04:18 crc kubenswrapper[4870]: I0216 17:04:18.181693 4870 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="fe3b7688-342e-44e0-ab6b-3df385991fe3" Feb 16 17:04:19 crc kubenswrapper[4870]: I0216 17:04:19.176344 4870 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dcb39c2a-789a-40d5-b431-9d436bcc54dd" Feb 16 17:04:19 crc kubenswrapper[4870]: I0216 17:04:19.176374 4870 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dcb39c2a-789a-40d5-b431-9d436bcc54dd" Feb 16 17:04:23 crc kubenswrapper[4870]: I0216 17:04:23.063810 4870 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 17:04:23 crc kubenswrapper[4870]: I0216 17:04:23.064467 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 17:04:26 crc kubenswrapper[4870]: I0216 17:04:26.237025 4870 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="fe3b7688-342e-44e0-ab6b-3df385991fe3" Feb 16 17:04:26 crc kubenswrapper[4870]: I0216 17:04:26.625331 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 17:04:27 crc kubenswrapper[4870]: I0216 17:04:27.696851 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 17:04:27 crc kubenswrapper[4870]: I0216 17:04:27.765111 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 17:04:27 crc kubenswrapper[4870]: I0216 17:04:27.933568 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 17:04:28 crc kubenswrapper[4870]: I0216 17:04:28.220792 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 17:04:28 crc kubenswrapper[4870]: I0216 17:04:28.242762 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 17:04:28 crc 
kubenswrapper[4870]: I0216 17:04:28.529377 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 17:04:28 crc kubenswrapper[4870]: I0216 17:04:28.794643 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 17:04:28 crc kubenswrapper[4870]: I0216 17:04:28.957542 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 17:04:29 crc kubenswrapper[4870]: I0216 17:04:29.030652 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 17:04:29 crc kubenswrapper[4870]: I0216 17:04:29.052715 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 17:04:29 crc kubenswrapper[4870]: I0216 17:04:29.453239 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 17:04:29 crc kubenswrapper[4870]: I0216 17:04:29.959728 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 17:04:30 crc kubenswrapper[4870]: I0216 17:04:30.487257 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 17:04:30 crc kubenswrapper[4870]: I0216 17:04:30.800551 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 17:04:30 crc kubenswrapper[4870]: I0216 17:04:30.801848 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 17:04:30 crc kubenswrapper[4870]: I0216 17:04:30.808889 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 
17:04:30 crc kubenswrapper[4870]: I0216 17:04:30.811417 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.043205 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.146987 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.215737 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.268198 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.280270 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.312593 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.317837 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.338772 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.373355 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.384047 4870 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"etcd-client" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.478343 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.537056 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.555219 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.571334 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.596723 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.681069 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.830710 4870 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.865662 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 17:04:31 crc kubenswrapper[4870]: I0216 17:04:31.959091 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.043356 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.054269 4870 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.057141 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.064584 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.138035 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.189392 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.396926 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.462830 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.465333 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.479661 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.482558 4870 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.486366 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.486419 4870 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh"] Feb 16 17:04:32 crc kubenswrapper[4870]: E0216 17:04:32.486600 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a" containerName="installer" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.486619 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a" containerName="installer" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.486705 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb8a8d4c-5fe8-48ab-a9c4-39d44650f60a" containerName="installer" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.487099 4870 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dcb39c2a-789a-40d5-b431-9d436bcc54dd" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.487164 4870 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dcb39c2a-789a-40d5-b431-9d436bcc54dd" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.487367 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.492483 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.493349 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.493410 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.493457 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.493360 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.493573 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.493426 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.493832 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.493895 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.493931 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 17:04:32 
crc kubenswrapper[4870]: I0216 17:04:32.494073 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.494160 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.494268 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.494583 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.501708 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.504044 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.504200 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.507424 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.515823 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.516185 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.516172813 podStartE2EDuration="15.516172813s" podCreationTimestamp="2026-02-16 17:04:17 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:04:32.513375242 +0000 UTC m=+276.996839626" watchObservedRunningTime="2026-02-16 17:04:32.516172813 +0000 UTC m=+276.999637197" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.632010 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-session\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.632058 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/237284c6-d0bc-4aa4-8916-2726b7e48497-audit-policies\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.632095 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.632130 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-user-template-error\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: 
\"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.632533 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-user-template-login\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.632564 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.632640 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.632669 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 
17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.632737 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/237284c6-d0bc-4aa4-8916-2726b7e48497-audit-dir\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.632790 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx9x7\" (UniqueName: \"kubernetes.io/projected/237284c6-d0bc-4aa4-8916-2726b7e48497-kube-api-access-xx9x7\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.632856 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.632883 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.632933 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.633002 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.726319 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.733829 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-user-template-error\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.734152 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-user-template-login\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.734287 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.734400 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.734519 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.734638 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/237284c6-d0bc-4aa4-8916-2726b7e48497-audit-dir\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.734759 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx9x7\" (UniqueName: \"kubernetes.io/projected/237284c6-d0bc-4aa4-8916-2726b7e48497-kube-api-access-xx9x7\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " 
pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.734900 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.735074 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.735213 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.735336 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.735556 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-session\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.734915 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/237284c6-d0bc-4aa4-8916-2726b7e48497-audit-dir\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.735763 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/237284c6-d0bc-4aa4-8916-2726b7e48497-audit-policies\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.735881 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.738437 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 
16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.740143 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.740184 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.740599 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-user-template-error\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.740617 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-session\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.740629 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-user-template-login\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.741083 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.741262 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/237284c6-d0bc-4aa4-8916-2726b7e48497-audit-policies\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.741490 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.742073 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 
17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.742097 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.751550 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/237284c6-d0bc-4aa4-8916-2726b7e48497-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.753535 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx9x7\" (UniqueName: \"kubernetes.io/projected/237284c6-d0bc-4aa4-8916-2726b7e48497-kube-api-access-xx9x7\") pod \"oauth-openshift-7f5b9fd94b-97pqh\" (UID: \"237284c6-d0bc-4aa4-8916-2726b7e48497\") " pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.813743 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.861447 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.862189 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.879789 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.904497 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.905620 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.960513 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 17:04:32 crc kubenswrapper[4870]: I0216 17:04:32.983590 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.052158 4870 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.056540 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.063096 4870 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 
192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.063147 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.063236 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.063794 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"9cb85c34b0e11af6824668a7e520c06779fb26a167a8acfd5ad5b668ebc6b99d"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.063906 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://9cb85c34b0e11af6824668a7e520c06779fb26a167a8acfd5ad5b668ebc6b99d" gracePeriod=30 Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.078699 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.090486 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.174021 4870 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.190303 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.207358 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.214331 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.310740 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.392233 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.393497 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.419096 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.495361 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.499203 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.634744 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.775669 4870 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.942321 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.958280 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 17:04:33 crc kubenswrapper[4870]: I0216 17:04:33.969231 4870 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 17:04:34 crc kubenswrapper[4870]: I0216 17:04:34.133671 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 17:04:34 crc kubenswrapper[4870]: I0216 17:04:34.143542 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 17:04:34 crc kubenswrapper[4870]: I0216 17:04:34.145446 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 17:04:34 crc kubenswrapper[4870]: I0216 17:04:34.245971 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 17:04:34 crc kubenswrapper[4870]: I0216 17:04:34.252403 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 17:04:34 crc kubenswrapper[4870]: I0216 17:04:34.272842 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 17:04:34 crc kubenswrapper[4870]: I0216 17:04:34.416585 4870 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"image-registry-certificates" Feb 16 17:04:34 crc kubenswrapper[4870]: I0216 17:04:34.576006 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 17:04:34 crc kubenswrapper[4870]: I0216 17:04:34.583927 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 17:04:34 crc kubenswrapper[4870]: I0216 17:04:34.636180 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 17:04:34 crc kubenswrapper[4870]: I0216 17:04:34.657791 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 17:04:34 crc kubenswrapper[4870]: I0216 17:04:34.715581 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 17:04:34 crc kubenswrapper[4870]: I0216 17:04:34.728168 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 17:04:34 crc kubenswrapper[4870]: I0216 17:04:34.883054 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 17:04:34 crc kubenswrapper[4870]: I0216 17:04:34.902928 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 17:04:35 crc kubenswrapper[4870]: I0216 17:04:35.086351 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 17:04:35 crc kubenswrapper[4870]: I0216 17:04:35.095783 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 17:04:35 crc 
kubenswrapper[4870]: I0216 17:04:35.097340 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 17:04:35 crc kubenswrapper[4870]: I0216 17:04:35.097978 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 17:04:35 crc kubenswrapper[4870]: I0216 17:04:35.237316 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 17:04:35 crc kubenswrapper[4870]: I0216 17:04:35.341018 4870 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 17:04:35 crc kubenswrapper[4870]: I0216 17:04:35.395040 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 17:04:35 crc kubenswrapper[4870]: I0216 17:04:35.444762 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 17:04:35 crc kubenswrapper[4870]: I0216 17:04:35.530727 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 17:04:35 crc kubenswrapper[4870]: I0216 17:04:35.584347 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 17:04:35 crc kubenswrapper[4870]: I0216 17:04:35.662370 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 17:04:35 crc kubenswrapper[4870]: I0216 17:04:35.666702 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 17:04:35 crc kubenswrapper[4870]: I0216 17:04:35.683919 4870 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Feb 16 17:04:35 crc kubenswrapper[4870]: I0216 17:04:35.684483 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 17:04:35 crc kubenswrapper[4870]: I0216 17:04:35.799680 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 17:04:35 crc kubenswrapper[4870]: I0216 17:04:35.876738 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 17:04:35 crc kubenswrapper[4870]: I0216 17:04:35.932120 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 17:04:35 crc kubenswrapper[4870]: I0216 17:04:35.980698 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.003268 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.007724 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.007737 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.107347 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.141012 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.144695 4870 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.198049 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.320722 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.331856 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.391891 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.391933 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.419860 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.456716 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.475707 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.495499 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.531384 4870 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.565701 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.573180 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.616348 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.620564 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.636910 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.680636 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.780403 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.822387 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 17:04:36 crc kubenswrapper[4870]: I0216 17:04:36.846879 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.046155 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 17:04:37 crc kubenswrapper[4870]: 
I0216 17:04:37.073907 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.110317 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.153965 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.210556 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.274784 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.318096 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.628892 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.649633 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.658685 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.664516 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.728981 4870 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.743872 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.783853 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.788846 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.789028 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.872112 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.892241 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.899087 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.930741 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 17:04:37 crc kubenswrapper[4870]: I0216 17:04:37.931665 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.042780 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.045110 4870 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.085898 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.132449 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.134567 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.300151 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.361941 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.481704 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.498753 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.526698 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.527011 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.528370 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 17:04:38 crc 
kubenswrapper[4870]: I0216 17:04:38.547216 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.594392 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.645112 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh"] Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.659761 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.659861 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.722478 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.746411 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.846827 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh"] Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.849596 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 17:04:38 crc kubenswrapper[4870]: I0216 17:04:38.907652 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.028785 4870 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.089507 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.151152 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.159385 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.285429 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" event={"ID":"237284c6-d0bc-4aa4-8916-2726b7e48497","Type":"ContainerStarted","Data":"9a768785f6f9ab861b57ed629b9cf7f3a9d83371f0597d6226c1696ebe07d325"} Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.285482 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" event={"ID":"237284c6-d0bc-4aa4-8916-2726b7e48497","Type":"ContainerStarted","Data":"84cf5c3e0a1e8f58e1861822d40ce1023ab4fbdd9f39ab1ab8f541019bdef59b"} Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.286298 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.292617 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.306324 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.343010 4870 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" podStartSLOduration=72.342989674 podStartE2EDuration="1m12.342989674s" podCreationTimestamp="2026-02-16 17:03:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:04:39.341552543 +0000 UTC m=+283.825016947" watchObservedRunningTime="2026-02-16 17:04:39.342989674 +0000 UTC m=+283.826454078" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.395080 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.481460 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.496962 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.544034 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7f5b9fd94b-97pqh" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.548587 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.549356 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.597203 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.632457 4870 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.642010 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.643046 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.715069 4870 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.715324 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://7acc930c272aa45fb35dabf82915cb8cbf8f2df7b43ed68e1f641679d3553668" gracePeriod=5 Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.901374 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 17:04:39 crc kubenswrapper[4870]: I0216 17:04:39.960143 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 17:04:40 crc kubenswrapper[4870]: I0216 17:04:40.194648 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 17:04:40 crc kubenswrapper[4870]: I0216 17:04:40.313480 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 17:04:40 crc kubenswrapper[4870]: I0216 17:04:40.410799 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 17:04:40 crc kubenswrapper[4870]: 
I0216 17:04:40.430839 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 17:04:40 crc kubenswrapper[4870]: I0216 17:04:40.435285 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 17:04:40 crc kubenswrapper[4870]: I0216 17:04:40.469288 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 17:04:40 crc kubenswrapper[4870]: I0216 17:04:40.596282 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 17:04:40 crc kubenswrapper[4870]: I0216 17:04:40.607261 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 17:04:40 crc kubenswrapper[4870]: I0216 17:04:40.848117 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 17:04:40 crc kubenswrapper[4870]: I0216 17:04:40.859307 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 17:04:40 crc kubenswrapper[4870]: I0216 17:04:40.863209 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 17:04:40 crc kubenswrapper[4870]: I0216 17:04:40.957457 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 17:04:40 crc kubenswrapper[4870]: I0216 17:04:40.985121 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 17:04:41 crc kubenswrapper[4870]: I0216 17:04:41.036720 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 17:04:41 crc 
kubenswrapper[4870]: I0216 17:04:41.051545 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 17:04:41 crc kubenswrapper[4870]: I0216 17:04:41.070585 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 17:04:41 crc kubenswrapper[4870]: I0216 17:04:41.274247 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 17:04:41 crc kubenswrapper[4870]: I0216 17:04:41.286472 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 17:04:41 crc kubenswrapper[4870]: I0216 17:04:41.379520 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 17:04:41 crc kubenswrapper[4870]: I0216 17:04:41.408216 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 17:04:41 crc kubenswrapper[4870]: I0216 17:04:41.475076 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 17:04:41 crc kubenswrapper[4870]: I0216 17:04:41.535342 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 17:04:41 crc kubenswrapper[4870]: I0216 17:04:41.871492 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 17:04:42 crc kubenswrapper[4870]: I0216 17:04:42.134665 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 17:04:42 crc kubenswrapper[4870]: I0216 17:04:42.198443 4870 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:04:42 crc kubenswrapper[4870]: I0216 17:04:42.237319 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 17:04:42 crc kubenswrapper[4870]: I0216 17:04:42.338089 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 17:04:42 crc kubenswrapper[4870]: I0216 17:04:42.383144 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 17:04:42 crc kubenswrapper[4870]: I0216 17:04:42.569529 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 17:04:42 crc kubenswrapper[4870]: I0216 17:04:42.660434 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 17:04:43 crc kubenswrapper[4870]: I0216 17:04:43.020858 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 17:04:43 crc kubenswrapper[4870]: I0216 17:04:43.076572 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 17:04:43 crc kubenswrapper[4870]: I0216 17:04:43.077126 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 17:04:43 crc kubenswrapper[4870]: I0216 17:04:43.077461 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 17:04:43 crc kubenswrapper[4870]: I0216 17:04:43.108779 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 17:04:43 crc kubenswrapper[4870]: I0216 17:04:43.216716 4870 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 17:04:43 crc kubenswrapper[4870]: I0216 17:04:43.290275 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 17:04:43 crc kubenswrapper[4870]: I0216 17:04:43.562039 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:04:43 crc kubenswrapper[4870]: I0216 17:04:43.569396 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 17:04:43 crc kubenswrapper[4870]: I0216 17:04:43.834172 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 17:04:44 crc kubenswrapper[4870]: I0216 17:04:44.151823 4870 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 17:04:44 crc kubenswrapper[4870]: I0216 17:04:44.253982 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.282337 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.282785 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.321730 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.321795 4870 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="7acc930c272aa45fb35dabf82915cb8cbf8f2df7b43ed68e1f641679d3553668" exitCode=137 Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.321908 4870 scope.go:117] "RemoveContainer" containerID="7acc930c272aa45fb35dabf82915cb8cbf8f2df7b43ed68e1f641679d3553668" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.321966 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.340267 4870 scope.go:117] "RemoveContainer" containerID="7acc930c272aa45fb35dabf82915cb8cbf8f2df7b43ed68e1f641679d3553668" Feb 16 17:04:45 crc kubenswrapper[4870]: E0216 17:04:45.340680 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7acc930c272aa45fb35dabf82915cb8cbf8f2df7b43ed68e1f641679d3553668\": container with ID starting with 7acc930c272aa45fb35dabf82915cb8cbf8f2df7b43ed68e1f641679d3553668 not found: ID does not exist" containerID="7acc930c272aa45fb35dabf82915cb8cbf8f2df7b43ed68e1f641679d3553668" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.340731 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7acc930c272aa45fb35dabf82915cb8cbf8f2df7b43ed68e1f641679d3553668"} err="failed to get container status \"7acc930c272aa45fb35dabf82915cb8cbf8f2df7b43ed68e1f641679d3553668\": rpc error: code = NotFound desc = could 
not find container \"7acc930c272aa45fb35dabf82915cb8cbf8f2df7b43ed68e1f641679d3553668\": container with ID starting with 7acc930c272aa45fb35dabf82915cb8cbf8f2df7b43ed68e1f641679d3553668 not found: ID does not exist" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.410293 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.410409 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.410410 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.410458 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.410500 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.410593 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.410610 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.410712 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.410848 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.411122 4870 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.411146 4870 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.411156 4870 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.411167 4870 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.421414 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.444136 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 17:04:45 crc kubenswrapper[4870]: I0216 17:04:45.512475 4870 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 17:04:46 crc kubenswrapper[4870]: I0216 17:04:46.248835 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 16 17:04:56 crc kubenswrapper[4870]: I0216 17:04:56.019273 4870 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 16 17:04:57 crc kubenswrapper[4870]: I0216 17:04:57.399989 4870 generic.go:334] "Generic (PLEG): container finished" podID="d9ed0cdf-88f2-42cd-93e9-22517410ca31" containerID="e960c3f1e997009c49bfe6aeeee46aa2872a57f240ed237f51979827aa6c0f1c" exitCode=0 Feb 16 17:04:57 crc kubenswrapper[4870]: I0216 17:04:57.400106 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" event={"ID":"d9ed0cdf-88f2-42cd-93e9-22517410ca31","Type":"ContainerDied","Data":"e960c3f1e997009c49bfe6aeeee46aa2872a57f240ed237f51979827aa6c0f1c"} Feb 16 17:04:57 crc kubenswrapper[4870]: I0216 17:04:57.401052 4870 scope.go:117] "RemoveContainer" containerID="e960c3f1e997009c49bfe6aeeee46aa2872a57f240ed237f51979827aa6c0f1c" Feb 16 17:04:58 crc kubenswrapper[4870]: I0216 17:04:58.407233 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" 
event={"ID":"d9ed0cdf-88f2-42cd-93e9-22517410ca31","Type":"ContainerStarted","Data":"b2d296d115598835e03da7902c27d6058d062291004bf60748bcda92f013ca74"} Feb 16 17:04:58 crc kubenswrapper[4870]: I0216 17:04:58.408283 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" Feb 16 17:04:58 crc kubenswrapper[4870]: I0216 17:04:58.409831 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" Feb 16 17:05:03 crc kubenswrapper[4870]: I0216 17:05:03.441630 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 16 17:05:03 crc kubenswrapper[4870]: I0216 17:05:03.445106 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 17:05:03 crc kubenswrapper[4870]: I0216 17:05:03.445179 4870 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9cb85c34b0e11af6824668a7e520c06779fb26a167a8acfd5ad5b668ebc6b99d" exitCode=137 Feb 16 17:05:03 crc kubenswrapper[4870]: I0216 17:05:03.445222 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9cb85c34b0e11af6824668a7e520c06779fb26a167a8acfd5ad5b668ebc6b99d"} Feb 16 17:05:03 crc kubenswrapper[4870]: I0216 17:05:03.445269 4870 scope.go:117] "RemoveContainer" containerID="46bd2a53bedbedb487c017e667286e24c5a9425eb4ecb76e2c9d982a86798291" Feb 16 17:05:04 crc kubenswrapper[4870]: I0216 17:05:04.457040 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 16 17:05:04 crc kubenswrapper[4870]: I0216 17:05:04.458286 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3adc57778a65df9bbd8042d66b6cb862fdd2ade0e861ddc193eca8982e6dbd52"} Feb 16 17:05:06 crc kubenswrapper[4870]: I0216 17:05:06.059216 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 17:05:13 crc kubenswrapper[4870]: I0216 17:05:13.100745 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 17:05:13 crc kubenswrapper[4870]: I0216 17:05:13.108216 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 17:05:13 crc kubenswrapper[4870]: I0216 17:05:13.519423 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 17:05:19 crc kubenswrapper[4870]: I0216 17:05:19.875425 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs"] Feb 16 17:05:19 crc kubenswrapper[4870]: I0216 17:05:19.876315 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" podUID="b0e0ea5e-92af-42e9-9f96-809c376bcc69" containerName="route-controller-manager" containerID="cri-o://c8d02dad44286f91d6de18368332a5cbcc9b7eb186668d27588f59e20f9dd1bf" gracePeriod=30 Feb 16 17:05:19 crc kubenswrapper[4870]: I0216 17:05:19.885177 4870 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c5snp"] Feb 16 17:05:19 crc kubenswrapper[4870]: I0216 17:05:19.885387 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" podUID="88cce487-2f4f-4589-8f7b-f6a1ed6bed2d" containerName="controller-manager" containerID="cri-o://24789dfaa9fef62a4a78e0bdd40e3803f2a529099cd0379ffd4d31068b843da0" gracePeriod=30 Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.320512 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.371097 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.513765 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e0ea5e-92af-42e9-9f96-809c376bcc69-client-ca\") pod \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\" (UID: \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\") " Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.513903 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e0ea5e-92af-42e9-9f96-809c376bcc69-serving-cert\") pod \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\" (UID: \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\") " Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.513985 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e0ea5e-92af-42e9-9f96-809c376bcc69-config\") pod \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\" (UID: \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\") " Feb 16 17:05:20 crc 
kubenswrapper[4870]: I0216 17:05:20.514032 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-client-ca\") pod \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.514079 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d2s9\" (UniqueName: \"kubernetes.io/projected/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-kube-api-access-2d2s9\") pod \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.514119 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-proxy-ca-bundles\") pod \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.514169 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-serving-cert\") pod \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.514220 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-config\") pod \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\" (UID: \"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d\") " Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.514278 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdlhm\" (UniqueName: 
\"kubernetes.io/projected/b0e0ea5e-92af-42e9-9f96-809c376bcc69-kube-api-access-mdlhm\") pod \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\" (UID: \"b0e0ea5e-92af-42e9-9f96-809c376bcc69\") " Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.514997 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0e0ea5e-92af-42e9-9f96-809c376bcc69-client-ca" (OuterVolumeSpecName: "client-ca") pod "b0e0ea5e-92af-42e9-9f96-809c376bcc69" (UID: "b0e0ea5e-92af-42e9-9f96-809c376bcc69"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.515192 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "88cce487-2f4f-4589-8f7b-f6a1ed6bed2d" (UID: "88cce487-2f4f-4589-8f7b-f6a1ed6bed2d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.515228 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-client-ca" (OuterVolumeSpecName: "client-ca") pod "88cce487-2f4f-4589-8f7b-f6a1ed6bed2d" (UID: "88cce487-2f4f-4589-8f7b-f6a1ed6bed2d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.515439 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-config" (OuterVolumeSpecName: "config") pod "88cce487-2f4f-4589-8f7b-f6a1ed6bed2d" (UID: "88cce487-2f4f-4589-8f7b-f6a1ed6bed2d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.516027 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0e0ea5e-92af-42e9-9f96-809c376bcc69-config" (OuterVolumeSpecName: "config") pod "b0e0ea5e-92af-42e9-9f96-809c376bcc69" (UID: "b0e0ea5e-92af-42e9-9f96-809c376bcc69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.516739 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e0ea5e-92af-42e9-9f96-809c376bcc69-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.516761 4870 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.516773 4870 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.516785 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.516794 4870 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e0ea5e-92af-42e9-9f96-809c376bcc69-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.520547 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-serving-cert" 
(OuterVolumeSpecName: "serving-cert") pod "88cce487-2f4f-4589-8f7b-f6a1ed6bed2d" (UID: "88cce487-2f4f-4589-8f7b-f6a1ed6bed2d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.520752 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0e0ea5e-92af-42e9-9f96-809c376bcc69-kube-api-access-mdlhm" (OuterVolumeSpecName: "kube-api-access-mdlhm") pod "b0e0ea5e-92af-42e9-9f96-809c376bcc69" (UID: "b0e0ea5e-92af-42e9-9f96-809c376bcc69"). InnerVolumeSpecName "kube-api-access-mdlhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.520766 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-kube-api-access-2d2s9" (OuterVolumeSpecName: "kube-api-access-2d2s9") pod "88cce487-2f4f-4589-8f7b-f6a1ed6bed2d" (UID: "88cce487-2f4f-4589-8f7b-f6a1ed6bed2d"). InnerVolumeSpecName "kube-api-access-2d2s9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.521762 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0e0ea5e-92af-42e9-9f96-809c376bcc69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b0e0ea5e-92af-42e9-9f96-809c376bcc69" (UID: "b0e0ea5e-92af-42e9-9f96-809c376bcc69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.556263 4870 generic.go:334] "Generic (PLEG): container finished" podID="88cce487-2f4f-4589-8f7b-f6a1ed6bed2d" containerID="24789dfaa9fef62a4a78e0bdd40e3803f2a529099cd0379ffd4d31068b843da0" exitCode=0 Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.556329 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.556317 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" event={"ID":"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d","Type":"ContainerDied","Data":"24789dfaa9fef62a4a78e0bdd40e3803f2a529099cd0379ffd4d31068b843da0"} Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.556529 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c5snp" event={"ID":"88cce487-2f4f-4589-8f7b-f6a1ed6bed2d","Type":"ContainerDied","Data":"c822ca7eed026140734ecef386b1cdc55af8b87d6cdffac1030b12e4bde4f9ae"} Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.556556 4870 scope.go:117] "RemoveContainer" containerID="24789dfaa9fef62a4a78e0bdd40e3803f2a529099cd0379ffd4d31068b843da0" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.558180 4870 generic.go:334] "Generic (PLEG): container finished" podID="b0e0ea5e-92af-42e9-9f96-809c376bcc69" containerID="c8d02dad44286f91d6de18368332a5cbcc9b7eb186668d27588f59e20f9dd1bf" exitCode=0 Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.558216 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" event={"ID":"b0e0ea5e-92af-42e9-9f96-809c376bcc69","Type":"ContainerDied","Data":"c8d02dad44286f91d6de18368332a5cbcc9b7eb186668d27588f59e20f9dd1bf"} Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.558235 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" event={"ID":"b0e0ea5e-92af-42e9-9f96-809c376bcc69","Type":"ContainerDied","Data":"3315badcfce7ccb872ce422f491ec0df7a133f83974841f9af89e5e0924ff485"} Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.558321 4870 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.585678 4870 scope.go:117] "RemoveContainer" containerID="24789dfaa9fef62a4a78e0bdd40e3803f2a529099cd0379ffd4d31068b843da0" Feb 16 17:05:20 crc kubenswrapper[4870]: E0216 17:05:20.586338 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24789dfaa9fef62a4a78e0bdd40e3803f2a529099cd0379ffd4d31068b843da0\": container with ID starting with 24789dfaa9fef62a4a78e0bdd40e3803f2a529099cd0379ffd4d31068b843da0 not found: ID does not exist" containerID="24789dfaa9fef62a4a78e0bdd40e3803f2a529099cd0379ffd4d31068b843da0" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.586411 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24789dfaa9fef62a4a78e0bdd40e3803f2a529099cd0379ffd4d31068b843da0"} err="failed to get container status \"24789dfaa9fef62a4a78e0bdd40e3803f2a529099cd0379ffd4d31068b843da0\": rpc error: code = NotFound desc = could not find container \"24789dfaa9fef62a4a78e0bdd40e3803f2a529099cd0379ffd4d31068b843da0\": container with ID starting with 24789dfaa9fef62a4a78e0bdd40e3803f2a529099cd0379ffd4d31068b843da0 not found: ID does not exist" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.586478 4870 scope.go:117] "RemoveContainer" containerID="c8d02dad44286f91d6de18368332a5cbcc9b7eb186668d27588f59e20f9dd1bf" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.599121 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c5snp"] Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.605283 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c5snp"] Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.605569 4870 
scope.go:117] "RemoveContainer" containerID="c8d02dad44286f91d6de18368332a5cbcc9b7eb186668d27588f59e20f9dd1bf" Feb 16 17:05:20 crc kubenswrapper[4870]: E0216 17:05:20.605972 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8d02dad44286f91d6de18368332a5cbcc9b7eb186668d27588f59e20f9dd1bf\": container with ID starting with c8d02dad44286f91d6de18368332a5cbcc9b7eb186668d27588f59e20f9dd1bf not found: ID does not exist" containerID="c8d02dad44286f91d6de18368332a5cbcc9b7eb186668d27588f59e20f9dd1bf" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.606006 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d02dad44286f91d6de18368332a5cbcc9b7eb186668d27588f59e20f9dd1bf"} err="failed to get container status \"c8d02dad44286f91d6de18368332a5cbcc9b7eb186668d27588f59e20f9dd1bf\": rpc error: code = NotFound desc = could not find container \"c8d02dad44286f91d6de18368332a5cbcc9b7eb186668d27588f59e20f9dd1bf\": container with ID starting with c8d02dad44286f91d6de18368332a5cbcc9b7eb186668d27588f59e20f9dd1bf not found: ID does not exist" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.611430 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs"] Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.614443 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6xvs"] Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.617710 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e0ea5e-92af-42e9-9f96-809c376bcc69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.617736 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d2s9\" (UniqueName: 
\"kubernetes.io/projected/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-kube-api-access-2d2s9\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.617746 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:20 crc kubenswrapper[4870]: I0216 17:05:20.617754 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdlhm\" (UniqueName: \"kubernetes.io/projected/b0e0ea5e-92af-42e9-9f96-809c376bcc69-kube-api-access-mdlhm\") on node \"crc\" DevicePath \"\"" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.308861 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"] Feb 16 17:05:21 crc kubenswrapper[4870]: E0216 17:05:21.309201 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e0ea5e-92af-42e9-9f96-809c376bcc69" containerName="route-controller-manager" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.309218 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e0ea5e-92af-42e9-9f96-809c376bcc69" containerName="route-controller-manager" Feb 16 17:05:21 crc kubenswrapper[4870]: E0216 17:05:21.309234 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.309242 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 17:05:21 crc kubenswrapper[4870]: E0216 17:05:21.309267 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88cce487-2f4f-4589-8f7b-f6a1ed6bed2d" containerName="controller-manager" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.309276 4870 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="88cce487-2f4f-4589-8f7b-f6a1ed6bed2d" containerName="controller-manager" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.309404 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="88cce487-2f4f-4589-8f7b-f6a1ed6bed2d" containerName="controller-manager" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.309418 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0e0ea5e-92af-42e9-9f96-809c376bcc69" containerName="route-controller-manager" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.309427 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.309824 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.313027 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"] Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.313403 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.313587 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.313748 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.313791 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.314427 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.315111 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.315369 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.316666 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.316680 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.317047 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.317077 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.321666 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.321820 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.324923 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 17:05:21 crc 
kubenswrapper[4870]: I0216 17:05:21.327361 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"] Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.351290 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"] Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.434847 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-client-ca\") pod \"controller-manager-66cc94c8b8-vqpwd\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.434902 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-config\") pod \"controller-manager-66cc94c8b8-vqpwd\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.435303 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e2a4685-5886-46d7-af50-d625060c822a-config\") pod \"route-controller-manager-7b895dcf8-dm2pg\" (UID: \"0e2a4685-5886-46d7-af50-d625060c822a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg" Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.435362 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e2a4685-5886-46d7-af50-d625060c822a-client-ca\") pod \"route-controller-manager-7b895dcf8-dm2pg\" (UID: 
\"0e2a4685-5886-46d7-af50-d625060c822a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.435430 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-proxy-ca-bundles\") pod \"controller-manager-66cc94c8b8-vqpwd\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.435460 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9shh8\" (UniqueName: \"kubernetes.io/projected/e0122c2c-828f-4794-80be-d945f512ae6b-kube-api-access-9shh8\") pod \"controller-manager-66cc94c8b8-vqpwd\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.435600 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e2a4685-5886-46d7-af50-d625060c822a-serving-cert\") pod \"route-controller-manager-7b895dcf8-dm2pg\" (UID: \"0e2a4685-5886-46d7-af50-d625060c822a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.435636 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-866vl\" (UniqueName: \"kubernetes.io/projected/0e2a4685-5886-46d7-af50-d625060c822a-kube-api-access-866vl\") pod \"route-controller-manager-7b895dcf8-dm2pg\" (UID: \"0e2a4685-5886-46d7-af50-d625060c822a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.435660 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0122c2c-828f-4794-80be-d945f512ae6b-serving-cert\") pod \"controller-manager-66cc94c8b8-vqpwd\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.537124 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e2a4685-5886-46d7-af50-d625060c822a-serving-cert\") pod \"route-controller-manager-7b895dcf8-dm2pg\" (UID: \"0e2a4685-5886-46d7-af50-d625060c822a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.537184 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-866vl\" (UniqueName: \"kubernetes.io/projected/0e2a4685-5886-46d7-af50-d625060c822a-kube-api-access-866vl\") pod \"route-controller-manager-7b895dcf8-dm2pg\" (UID: \"0e2a4685-5886-46d7-af50-d625060c822a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.537210 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0122c2c-828f-4794-80be-d945f512ae6b-serving-cert\") pod \"controller-manager-66cc94c8b8-vqpwd\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.537229 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-client-ca\") pod \"controller-manager-66cc94c8b8-vqpwd\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.537255 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-config\") pod \"controller-manager-66cc94c8b8-vqpwd\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.537283 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e2a4685-5886-46d7-af50-d625060c822a-config\") pod \"route-controller-manager-7b895dcf8-dm2pg\" (UID: \"0e2a4685-5886-46d7-af50-d625060c822a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.537306 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e2a4685-5886-46d7-af50-d625060c822a-client-ca\") pod \"route-controller-manager-7b895dcf8-dm2pg\" (UID: \"0e2a4685-5886-46d7-af50-d625060c822a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.537328 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-proxy-ca-bundles\") pod \"controller-manager-66cc94c8b8-vqpwd\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.537350 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9shh8\" (UniqueName: \"kubernetes.io/projected/e0122c2c-828f-4794-80be-d945f512ae6b-kube-api-access-9shh8\") pod \"controller-manager-66cc94c8b8-vqpwd\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.538982 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e2a4685-5886-46d7-af50-d625060c822a-client-ca\") pod \"route-controller-manager-7b895dcf8-dm2pg\" (UID: \"0e2a4685-5886-46d7-af50-d625060c822a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.538982 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-client-ca\") pod \"controller-manager-66cc94c8b8-vqpwd\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.539281 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e2a4685-5886-46d7-af50-d625060c822a-config\") pod \"route-controller-manager-7b895dcf8-dm2pg\" (UID: \"0e2a4685-5886-46d7-af50-d625060c822a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.540251 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-proxy-ca-bundles\") pod \"controller-manager-66cc94c8b8-vqpwd\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.540373 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-config\") pod \"controller-manager-66cc94c8b8-vqpwd\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.541208 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e2a4685-5886-46d7-af50-d625060c822a-serving-cert\") pod \"route-controller-manager-7b895dcf8-dm2pg\" (UID: \"0e2a4685-5886-46d7-af50-d625060c822a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.542501 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0122c2c-828f-4794-80be-d945f512ae6b-serving-cert\") pod \"controller-manager-66cc94c8b8-vqpwd\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.555572 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9shh8\" (UniqueName: \"kubernetes.io/projected/e0122c2c-828f-4794-80be-d945f512ae6b-kube-api-access-9shh8\") pod \"controller-manager-66cc94c8b8-vqpwd\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") " pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.557102 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-866vl\" (UniqueName: \"kubernetes.io/projected/0e2a4685-5886-46d7-af50-d625060c822a-kube-api-access-866vl\") pod \"route-controller-manager-7b895dcf8-dm2pg\" (UID: \"0e2a4685-5886-46d7-af50-d625060c822a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.632967 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.642429 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.837095 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"]
Feb 16 17:05:21 crc kubenswrapper[4870]: W0216 17:05:21.842752 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e2a4685_5886_46d7_af50_d625060c822a.slice/crio-ce59310d2ee91fe559042842993fc9889b9f43a740e587c957a76e3a59005e85 WatchSource:0}: Error finding container ce59310d2ee91fe559042842993fc9889b9f43a740e587c957a76e3a59005e85: Status 404 returned error can't find the container with id ce59310d2ee91fe559042842993fc9889b9f43a740e587c957a76e3a59005e85
Feb 16 17:05:21 crc kubenswrapper[4870]: I0216 17:05:21.882606 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"]
Feb 16 17:05:21 crc kubenswrapper[4870]: W0216 17:05:21.895336 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0122c2c_828f_4794_80be_d945f512ae6b.slice/crio-1dc7a1ccbcea3b74f7d569c7b9c4a2a81ea066b47fcdedc8195fc50f560950dc WatchSource:0}: Error finding container 1dc7a1ccbcea3b74f7d569c7b9c4a2a81ea066b47fcdedc8195fc50f560950dc: Status 404 returned error can't find the container with id 1dc7a1ccbcea3b74f7d569c7b9c4a2a81ea066b47fcdedc8195fc50f560950dc
Feb 16 17:05:22 crc kubenswrapper[4870]: I0216 17:05:22.229733 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88cce487-2f4f-4589-8f7b-f6a1ed6bed2d" path="/var/lib/kubelet/pods/88cce487-2f4f-4589-8f7b-f6a1ed6bed2d/volumes"
Feb 16 17:05:22 crc kubenswrapper[4870]: I0216 17:05:22.230844 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0e0ea5e-92af-42e9-9f96-809c376bcc69" path="/var/lib/kubelet/pods/b0e0ea5e-92af-42e9-9f96-809c376bcc69/volumes"
Feb 16 17:05:22 crc kubenswrapper[4870]: I0216 17:05:22.573699 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd" event={"ID":"e0122c2c-828f-4794-80be-d945f512ae6b","Type":"ContainerStarted","Data":"518e743bc624ef2f44e10a703aeadd752f57757b2b1699a7880c9361671852ec"}
Feb 16 17:05:22 crc kubenswrapper[4870]: I0216 17:05:22.573774 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd" event={"ID":"e0122c2c-828f-4794-80be-d945f512ae6b","Type":"ContainerStarted","Data":"1dc7a1ccbcea3b74f7d569c7b9c4a2a81ea066b47fcdedc8195fc50f560950dc"}
Feb 16 17:05:22 crc kubenswrapper[4870]: I0216 17:05:22.573903 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"
Feb 16 17:05:22 crc kubenswrapper[4870]: I0216 17:05:22.577285 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg" event={"ID":"0e2a4685-5886-46d7-af50-d625060c822a","Type":"ContainerStarted","Data":"81b4236e8fce67167364f5f35a0fbafcab6621ae90ab21afeaa06c823a0f35d0"}
Feb 16 17:05:22 crc kubenswrapper[4870]: I0216 17:05:22.577334 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg" event={"ID":"0e2a4685-5886-46d7-af50-d625060c822a","Type":"ContainerStarted","Data":"ce59310d2ee91fe559042842993fc9889b9f43a740e587c957a76e3a59005e85"}
Feb 16 17:05:22 crc kubenswrapper[4870]: I0216 17:05:22.577610 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"
Feb 16 17:05:22 crc kubenswrapper[4870]: I0216 17:05:22.582642 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"
Feb 16 17:05:22 crc kubenswrapper[4870]: I0216 17:05:22.584979 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"
Feb 16 17:05:22 crc kubenswrapper[4870]: I0216 17:05:22.614683 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd" podStartSLOduration=3.614662677 podStartE2EDuration="3.614662677s" podCreationTimestamp="2026-02-16 17:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:05:22.594156478 +0000 UTC m=+327.077620862" watchObservedRunningTime="2026-02-16 17:05:22.614662677 +0000 UTC m=+327.098127061"
Feb 16 17:05:22 crc kubenswrapper[4870]: I0216 17:05:22.633446 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg" podStartSLOduration=2.633429205 podStartE2EDuration="2.633429205s" podCreationTimestamp="2026-02-16 17:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:05:22.632152849 +0000 UTC m=+327.115617253" watchObservedRunningTime="2026-02-16 17:05:22.633429205 +0000 UTC m=+327.116893589"
Feb 16 17:05:35 crc kubenswrapper[4870]: I0216 17:05:35.366360 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:05:35 crc kubenswrapper[4870]: I0216 17:05:35.366966 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:06:05 crc kubenswrapper[4870]: I0216 17:06:05.367279 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:06:05 crc kubenswrapper[4870]: I0216 17:06:05.368009 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.441405 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cvzvh"]
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.443112 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.456572 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cvzvh"]
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.611433 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfbkm\" (UniqueName: \"kubernetes.io/projected/29a8e249-44c8-468d-acfe-fc914e4de542-kube-api-access-nfbkm\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.611685 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.611794 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/29a8e249-44c8-468d-acfe-fc914e4de542-bound-sa-token\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.611860 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/29a8e249-44c8-468d-acfe-fc914e4de542-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.611931 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29a8e249-44c8-468d-acfe-fc914e4de542-trusted-ca\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.612074 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/29a8e249-44c8-468d-acfe-fc914e4de542-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.612190 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/29a8e249-44c8-468d-acfe-fc914e4de542-registry-tls\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.612268 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/29a8e249-44c8-468d-acfe-fc914e4de542-registry-certificates\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.634081 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.714018 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/29a8e249-44c8-468d-acfe-fc914e4de542-bound-sa-token\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.714347 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/29a8e249-44c8-468d-acfe-fc914e4de542-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.714453 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29a8e249-44c8-468d-acfe-fc914e4de542-trusted-ca\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.714549 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/29a8e249-44c8-468d-acfe-fc914e4de542-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.714652 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/29a8e249-44c8-468d-acfe-fc914e4de542-registry-tls\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.714762 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/29a8e249-44c8-468d-acfe-fc914e4de542-registry-certificates\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.714864 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfbkm\" (UniqueName: \"kubernetes.io/projected/29a8e249-44c8-468d-acfe-fc914e4de542-kube-api-access-nfbkm\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.715411 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/29a8e249-44c8-468d-acfe-fc914e4de542-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.716282 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/29a8e249-44c8-468d-acfe-fc914e4de542-registry-certificates\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.716827 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29a8e249-44c8-468d-acfe-fc914e4de542-trusted-ca\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.720101 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/29a8e249-44c8-468d-acfe-fc914e4de542-registry-tls\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.729475 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/29a8e249-44c8-468d-acfe-fc914e4de542-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.734079 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/29a8e249-44c8-468d-acfe-fc914e4de542-bound-sa-token\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.734434 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfbkm\" (UniqueName: \"kubernetes.io/projected/29a8e249-44c8-468d-acfe-fc914e4de542-kube-api-access-nfbkm\") pod \"image-registry-66df7c8f76-cvzvh\" (UID: \"29a8e249-44c8-468d-acfe-fc914e4de542\") " pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:08 crc kubenswrapper[4870]: I0216 17:06:08.763858 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:09 crc kubenswrapper[4870]: I0216 17:06:09.159241 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cvzvh"]
Feb 16 17:06:09 crc kubenswrapper[4870]: I0216 17:06:09.865891 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh" event={"ID":"29a8e249-44c8-468d-acfe-fc914e4de542","Type":"ContainerStarted","Data":"f8e7098c2166df2b635d65809ea6f4986f3dfcab3798202f30772a93ae515be3"}
Feb 16 17:06:09 crc kubenswrapper[4870]: I0216 17:06:09.866275 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh" event={"ID":"29a8e249-44c8-468d-acfe-fc914e4de542","Type":"ContainerStarted","Data":"0b991b3e6ec13a0e795bef8260913081828714b7e516939cec3aee5161cad0d2"}
Feb 16 17:06:09 crc kubenswrapper[4870]: I0216 17:06:09.866302 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh"
Feb 16 17:06:09 crc kubenswrapper[4870]: I0216 17:06:09.890031 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh" podStartSLOduration=1.890011587 podStartE2EDuration="1.890011587s" podCreationTimestamp="2026-02-16 17:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:06:09.888003548 +0000 UTC m=+374.371467942" watchObservedRunningTime="2026-02-16 17:06:09.890011587 +0000 UTC m=+374.373475981"
Feb 16 17:06:16 crc kubenswrapper[4870]: I0216 17:06:16.513097 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"]
Feb 16 17:06:16 crc kubenswrapper[4870]: I0216 17:06:16.513327 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd" podUID="e0122c2c-828f-4794-80be-d945f512ae6b" containerName="controller-manager" containerID="cri-o://518e743bc624ef2f44e10a703aeadd752f57757b2b1699a7880c9361671852ec" gracePeriod=30
Feb 16 17:06:16 crc kubenswrapper[4870]: I0216 17:06:16.615804 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"]
Feb 16 17:06:16 crc kubenswrapper[4870]: I0216 17:06:16.616055 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg" podUID="0e2a4685-5886-46d7-af50-d625060c822a" containerName="route-controller-manager" containerID="cri-o://81b4236e8fce67167364f5f35a0fbafcab6621ae90ab21afeaa06c823a0f35d0" gracePeriod=30
Feb 16 17:06:16 crc kubenswrapper[4870]: I0216 17:06:16.911505 4870 generic.go:334] "Generic (PLEG): container finished" podID="0e2a4685-5886-46d7-af50-d625060c822a" containerID="81b4236e8fce67167364f5f35a0fbafcab6621ae90ab21afeaa06c823a0f35d0" exitCode=0
Feb 16 17:06:16 crc kubenswrapper[4870]: I0216 17:06:16.911569 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg" event={"ID":"0e2a4685-5886-46d7-af50-d625060c822a","Type":"ContainerDied","Data":"81b4236e8fce67167364f5f35a0fbafcab6621ae90ab21afeaa06c823a0f35d0"}
Feb 16 17:06:16 crc kubenswrapper[4870]: I0216 17:06:16.915584 4870 generic.go:334] "Generic (PLEG): container finished" podID="e0122c2c-828f-4794-80be-d945f512ae6b" containerID="518e743bc624ef2f44e10a703aeadd752f57757b2b1699a7880c9361671852ec" exitCode=0
Feb 16 17:06:16 crc kubenswrapper[4870]: I0216 17:06:16.915638 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd" event={"ID":"e0122c2c-828f-4794-80be-d945f512ae6b","Type":"ContainerDied","Data":"518e743bc624ef2f44e10a703aeadd752f57757b2b1699a7880c9361671852ec"}
Feb 16 17:06:16 crc kubenswrapper[4870]: I0216 17:06:16.915667 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd" event={"ID":"e0122c2c-828f-4794-80be-d945f512ae6b","Type":"ContainerDied","Data":"1dc7a1ccbcea3b74f7d569c7b9c4a2a81ea066b47fcdedc8195fc50f560950dc"}
Feb 16 17:06:16 crc kubenswrapper[4870]: I0216 17:06:16.915681 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dc7a1ccbcea3b74f7d569c7b9c4a2a81ea066b47fcdedc8195fc50f560950dc"
Feb 16 17:06:16 crc kubenswrapper[4870]: I0216 17:06:16.925365 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.039593 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.061986 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9shh8\" (UniqueName: \"kubernetes.io/projected/e0122c2c-828f-4794-80be-d945f512ae6b-kube-api-access-9shh8\") pod \"e0122c2c-828f-4794-80be-d945f512ae6b\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") "
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.062046 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-proxy-ca-bundles\") pod \"e0122c2c-828f-4794-80be-d945f512ae6b\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") "
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.062139 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0122c2c-828f-4794-80be-d945f512ae6b-serving-cert\") pod \"e0122c2c-828f-4794-80be-d945f512ae6b\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") "
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.062174 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-config\") pod \"e0122c2c-828f-4794-80be-d945f512ae6b\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") "
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.062207 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-client-ca\") pod \"e0122c2c-828f-4794-80be-d945f512ae6b\" (UID: \"e0122c2c-828f-4794-80be-d945f512ae6b\") "
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.063360 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e0122c2c-828f-4794-80be-d945f512ae6b" (UID: "e0122c2c-828f-4794-80be-d945f512ae6b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.063624 4870 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.064418 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-client-ca" (OuterVolumeSpecName: "client-ca") pod "e0122c2c-828f-4794-80be-d945f512ae6b" (UID: "e0122c2c-828f-4794-80be-d945f512ae6b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.064527 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-config" (OuterVolumeSpecName: "config") pod "e0122c2c-828f-4794-80be-d945f512ae6b" (UID: "e0122c2c-828f-4794-80be-d945f512ae6b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.070383 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0122c2c-828f-4794-80be-d945f512ae6b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e0122c2c-828f-4794-80be-d945f512ae6b" (UID: "e0122c2c-828f-4794-80be-d945f512ae6b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.071430 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0122c2c-828f-4794-80be-d945f512ae6b-kube-api-access-9shh8" (OuterVolumeSpecName: "kube-api-access-9shh8") pod "e0122c2c-828f-4794-80be-d945f512ae6b" (UID: "e0122c2c-828f-4794-80be-d945f512ae6b"). InnerVolumeSpecName "kube-api-access-9shh8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.164285 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e2a4685-5886-46d7-af50-d625060c822a-config\") pod \"0e2a4685-5886-46d7-af50-d625060c822a\" (UID: \"0e2a4685-5886-46d7-af50-d625060c822a\") "
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.164346 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-866vl\" (UniqueName: \"kubernetes.io/projected/0e2a4685-5886-46d7-af50-d625060c822a-kube-api-access-866vl\") pod \"0e2a4685-5886-46d7-af50-d625060c822a\" (UID: \"0e2a4685-5886-46d7-af50-d625060c822a\") "
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.164429 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e2a4685-5886-46d7-af50-d625060c822a-serving-cert\") pod \"0e2a4685-5886-46d7-af50-d625060c822a\" (UID: \"0e2a4685-5886-46d7-af50-d625060c822a\") "
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.164514 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e2a4685-5886-46d7-af50-d625060c822a-client-ca\") pod \"0e2a4685-5886-46d7-af50-d625060c822a\" (UID: \"0e2a4685-5886-46d7-af50-d625060c822a\") "
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.164751 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9shh8\" (UniqueName: \"kubernetes.io/projected/e0122c2c-828f-4794-80be-d945f512ae6b-kube-api-access-9shh8\") on node \"crc\" DevicePath \"\""
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.164769 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0122c2c-828f-4794-80be-d945f512ae6b-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.164778 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-config\") on node \"crc\" DevicePath \"\""
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.164788 4870 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0122c2c-828f-4794-80be-d945f512ae6b-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.165375 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e2a4685-5886-46d7-af50-d625060c822a-config" (OuterVolumeSpecName: "config") pod "0e2a4685-5886-46d7-af50-d625060c822a" (UID: "0e2a4685-5886-46d7-af50-d625060c822a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.165443 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e2a4685-5886-46d7-af50-d625060c822a-client-ca" (OuterVolumeSpecName: "client-ca") pod "0e2a4685-5886-46d7-af50-d625060c822a" (UID: "0e2a4685-5886-46d7-af50-d625060c822a"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.168116 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e2a4685-5886-46d7-af50-d625060c822a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0e2a4685-5886-46d7-af50-d625060c822a" (UID: "0e2a4685-5886-46d7-af50-d625060c822a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.168640 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e2a4685-5886-46d7-af50-d625060c822a-kube-api-access-866vl" (OuterVolumeSpecName: "kube-api-access-866vl") pod "0e2a4685-5886-46d7-af50-d625060c822a" (UID: "0e2a4685-5886-46d7-af50-d625060c822a"). InnerVolumeSpecName "kube-api-access-866vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.265504 4870 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e2a4685-5886-46d7-af50-d625060c822a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.265561 4870 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e2a4685-5886-46d7-af50-d625060c822a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.265571 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-866vl\" (UniqueName: \"kubernetes.io/projected/0e2a4685-5886-46d7-af50-d625060c822a-kube-api-access-866vl\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.265585 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e2a4685-5886-46d7-af50-d625060c822a-config\") on node \"crc\" DevicePath 
\"\"" Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.934693 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg" event={"ID":"0e2a4685-5886-46d7-af50-d625060c822a","Type":"ContainerDied","Data":"ce59310d2ee91fe559042842993fc9889b9f43a740e587c957a76e3a59005e85"} Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.934704 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd" Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.934704 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg" Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.934786 4870 scope.go:117] "RemoveContainer" containerID="81b4236e8fce67167364f5f35a0fbafcab6621ae90ab21afeaa06c823a0f35d0" Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.987795 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"] Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.994540 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-66cc94c8b8-vqpwd"] Feb 16 17:06:17 crc kubenswrapper[4870]: I0216 17:06:17.999151 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"] Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.003750 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-dm2pg"] Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.239842 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e2a4685-5886-46d7-af50-d625060c822a" 
path="/var/lib/kubelet/pods/0e2a4685-5886-46d7-af50-d625060c822a/volumes" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.241100 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0122c2c-828f-4794-80be-d945f512ae6b" path="/var/lib/kubelet/pods/e0122c2c-828f-4794-80be-d945f512ae6b/volumes" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.352609 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb"] Feb 16 17:06:18 crc kubenswrapper[4870]: E0216 17:06:18.353257 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0122c2c-828f-4794-80be-d945f512ae6b" containerName="controller-manager" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.353272 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0122c2c-828f-4794-80be-d945f512ae6b" containerName="controller-manager" Feb 16 17:06:18 crc kubenswrapper[4870]: E0216 17:06:18.353290 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e2a4685-5886-46d7-af50-d625060c822a" containerName="route-controller-manager" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.353298 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2a4685-5886-46d7-af50-d625060c822a" containerName="route-controller-manager" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.353415 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e2a4685-5886-46d7-af50-d625060c822a" containerName="route-controller-manager" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.353437 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0122c2c-828f-4794-80be-d945f512ae6b" containerName="controller-manager" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.353866 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.356067 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.356233 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.357113 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-75694dbb5f-kdngs"] Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.357768 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.359616 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.359904 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.360528 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.360771 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.361578 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.363414 4870 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"config" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.364224 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.364754 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.366304 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.366533 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.370332 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.376008 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb"] Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.380777 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75694dbb5f-kdngs"] Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.381579 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2f54461-90e7-4a7e-9835-6a9fb3b79499-proxy-ca-bundles\") pod \"controller-manager-75694dbb5f-kdngs\" (UID: \"b2f54461-90e7-4a7e-9835-6a9fb3b79499\") " pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.381637 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dsskt\" (UniqueName: \"kubernetes.io/projected/b2f54461-90e7-4a7e-9835-6a9fb3b79499-kube-api-access-dsskt\") pod \"controller-manager-75694dbb5f-kdngs\" (UID: \"b2f54461-90e7-4a7e-9835-6a9fb3b79499\") " pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.381669 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f8645a-b8e4-4de0-84da-da9bd3c8124f-config\") pod \"route-controller-manager-6d44f7fc68-b59kb\" (UID: \"03f8645a-b8e4-4de0-84da-da9bd3c8124f\") " pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.381696 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl6k7\" (UniqueName: \"kubernetes.io/projected/03f8645a-b8e4-4de0-84da-da9bd3c8124f-kube-api-access-xl6k7\") pod \"route-controller-manager-6d44f7fc68-b59kb\" (UID: \"03f8645a-b8e4-4de0-84da-da9bd3c8124f\") " pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.381724 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2f54461-90e7-4a7e-9835-6a9fb3b79499-config\") pod \"controller-manager-75694dbb5f-kdngs\" (UID: \"b2f54461-90e7-4a7e-9835-6a9fb3b79499\") " pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.381755 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2f54461-90e7-4a7e-9835-6a9fb3b79499-serving-cert\") pod \"controller-manager-75694dbb5f-kdngs\" (UID: \"b2f54461-90e7-4a7e-9835-6a9fb3b79499\") " 
pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.381782 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f8645a-b8e4-4de0-84da-da9bd3c8124f-serving-cert\") pod \"route-controller-manager-6d44f7fc68-b59kb\" (UID: \"03f8645a-b8e4-4de0-84da-da9bd3c8124f\") " pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.381811 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2f54461-90e7-4a7e-9835-6a9fb3b79499-client-ca\") pod \"controller-manager-75694dbb5f-kdngs\" (UID: \"b2f54461-90e7-4a7e-9835-6a9fb3b79499\") " pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.381844 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03f8645a-b8e4-4de0-84da-da9bd3c8124f-client-ca\") pod \"route-controller-manager-6d44f7fc68-b59kb\" (UID: \"03f8645a-b8e4-4de0-84da-da9bd3c8124f\") " pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.482661 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2f54461-90e7-4a7e-9835-6a9fb3b79499-client-ca\") pod \"controller-manager-75694dbb5f-kdngs\" (UID: \"b2f54461-90e7-4a7e-9835-6a9fb3b79499\") " pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.482728 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/03f8645a-b8e4-4de0-84da-da9bd3c8124f-client-ca\") pod \"route-controller-manager-6d44f7fc68-b59kb\" (UID: \"03f8645a-b8e4-4de0-84da-da9bd3c8124f\") " pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.482765 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2f54461-90e7-4a7e-9835-6a9fb3b79499-proxy-ca-bundles\") pod \"controller-manager-75694dbb5f-kdngs\" (UID: \"b2f54461-90e7-4a7e-9835-6a9fb3b79499\") " pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.482808 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsskt\" (UniqueName: \"kubernetes.io/projected/b2f54461-90e7-4a7e-9835-6a9fb3b79499-kube-api-access-dsskt\") pod \"controller-manager-75694dbb5f-kdngs\" (UID: \"b2f54461-90e7-4a7e-9835-6a9fb3b79499\") " pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.482835 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f8645a-b8e4-4de0-84da-da9bd3c8124f-config\") pod \"route-controller-manager-6d44f7fc68-b59kb\" (UID: \"03f8645a-b8e4-4de0-84da-da9bd3c8124f\") " pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.482862 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl6k7\" (UniqueName: \"kubernetes.io/projected/03f8645a-b8e4-4de0-84da-da9bd3c8124f-kube-api-access-xl6k7\") pod \"route-controller-manager-6d44f7fc68-b59kb\" (UID: \"03f8645a-b8e4-4de0-84da-da9bd3c8124f\") " pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" 
Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.483282 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2f54461-90e7-4a7e-9835-6a9fb3b79499-config\") pod \"controller-manager-75694dbb5f-kdngs\" (UID: \"b2f54461-90e7-4a7e-9835-6a9fb3b79499\") " pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.483547 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2f54461-90e7-4a7e-9835-6a9fb3b79499-serving-cert\") pod \"controller-manager-75694dbb5f-kdngs\" (UID: \"b2f54461-90e7-4a7e-9835-6a9fb3b79499\") " pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.483578 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f8645a-b8e4-4de0-84da-da9bd3c8124f-serving-cert\") pod \"route-controller-manager-6d44f7fc68-b59kb\" (UID: \"03f8645a-b8e4-4de0-84da-da9bd3c8124f\") " pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.484227 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2f54461-90e7-4a7e-9835-6a9fb3b79499-client-ca\") pod \"controller-manager-75694dbb5f-kdngs\" (UID: \"b2f54461-90e7-4a7e-9835-6a9fb3b79499\") " pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.484285 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2f54461-90e7-4a7e-9835-6a9fb3b79499-proxy-ca-bundles\") pod \"controller-manager-75694dbb5f-kdngs\" (UID: 
\"b2f54461-90e7-4a7e-9835-6a9fb3b79499\") " pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.484461 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f8645a-b8e4-4de0-84da-da9bd3c8124f-config\") pod \"route-controller-manager-6d44f7fc68-b59kb\" (UID: \"03f8645a-b8e4-4de0-84da-da9bd3c8124f\") " pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.484642 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2f54461-90e7-4a7e-9835-6a9fb3b79499-config\") pod \"controller-manager-75694dbb5f-kdngs\" (UID: \"b2f54461-90e7-4a7e-9835-6a9fb3b79499\") " pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.485333 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03f8645a-b8e4-4de0-84da-da9bd3c8124f-client-ca\") pod \"route-controller-manager-6d44f7fc68-b59kb\" (UID: \"03f8645a-b8e4-4de0-84da-da9bd3c8124f\") " pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.491515 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2f54461-90e7-4a7e-9835-6a9fb3b79499-serving-cert\") pod \"controller-manager-75694dbb5f-kdngs\" (UID: \"b2f54461-90e7-4a7e-9835-6a9fb3b79499\") " pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.491105 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f8645a-b8e4-4de0-84da-da9bd3c8124f-serving-cert\") 
pod \"route-controller-manager-6d44f7fc68-b59kb\" (UID: \"03f8645a-b8e4-4de0-84da-da9bd3c8124f\") " pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.498843 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl6k7\" (UniqueName: \"kubernetes.io/projected/03f8645a-b8e4-4de0-84da-da9bd3c8124f-kube-api-access-xl6k7\") pod \"route-controller-manager-6d44f7fc68-b59kb\" (UID: \"03f8645a-b8e4-4de0-84da-da9bd3c8124f\") " pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.499099 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsskt\" (UniqueName: \"kubernetes.io/projected/b2f54461-90e7-4a7e-9835-6a9fb3b79499-kube-api-access-dsskt\") pod \"controller-manager-75694dbb5f-kdngs\" (UID: \"b2f54461-90e7-4a7e-9835-6a9fb3b79499\") " pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.691977 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" Feb 16 17:06:18 crc kubenswrapper[4870]: I0216 17:06:18.701799 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:19 crc kubenswrapper[4870]: I0216 17:06:19.143580 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75694dbb5f-kdngs"] Feb 16 17:06:19 crc kubenswrapper[4870]: I0216 17:06:19.146934 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb"] Feb 16 17:06:19 crc kubenswrapper[4870]: I0216 17:06:19.949283 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" event={"ID":"03f8645a-b8e4-4de0-84da-da9bd3c8124f","Type":"ContainerStarted","Data":"c52a4d1fe310ef1fe8953caa7735f21318bc772c741def7aca199a55425b4ee7"} Feb 16 17:06:19 crc kubenswrapper[4870]: I0216 17:06:19.949681 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" Feb 16 17:06:19 crc kubenswrapper[4870]: I0216 17:06:19.949701 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" event={"ID":"03f8645a-b8e4-4de0-84da-da9bd3c8124f","Type":"ContainerStarted","Data":"a3f64799cebef330e83366afb948a9f1353900677f6f7910f229823f962714f7"} Feb 16 17:06:19 crc kubenswrapper[4870]: I0216 17:06:19.951389 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" event={"ID":"b2f54461-90e7-4a7e-9835-6a9fb3b79499","Type":"ContainerStarted","Data":"b5713e76a9a2e8c78fafd99e610030463267ee252b7a4b2c50eb85810789a59d"} Feb 16 17:06:19 crc kubenswrapper[4870]: I0216 17:06:19.951442 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" 
event={"ID":"b2f54461-90e7-4a7e-9835-6a9fb3b79499","Type":"ContainerStarted","Data":"2255dbe4809b0567ab5a72899ab10ff944818d066ad830f85218e739490ea9b7"} Feb 16 17:06:19 crc kubenswrapper[4870]: I0216 17:06:19.952295 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:19 crc kubenswrapper[4870]: I0216 17:06:19.958217 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" Feb 16 17:06:19 crc kubenswrapper[4870]: I0216 17:06:19.958830 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" Feb 16 17:06:19 crc kubenswrapper[4870]: I0216 17:06:19.968601 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d44f7fc68-b59kb" podStartSLOduration=3.968579939 podStartE2EDuration="3.968579939s" podCreationTimestamp="2026-02-16 17:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:06:19.966635332 +0000 UTC m=+384.450099736" watchObservedRunningTime="2026-02-16 17:06:19.968579939 +0000 UTC m=+384.452044333" Feb 16 17:06:20 crc kubenswrapper[4870]: I0216 17:06:20.024030 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75694dbb5f-kdngs" podStartSLOduration=4.024009885 podStartE2EDuration="4.024009885s" podCreationTimestamp="2026-02-16 17:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:06:20.023646594 +0000 UTC m=+384.507110988" watchObservedRunningTime="2026-02-16 17:06:20.024009885 +0000 UTC m=+384.507474279" Feb 16 17:06:28 
crc kubenswrapper[4870]: I0216 17:06:28.770015 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-cvzvh" Feb 16 17:06:28 crc kubenswrapper[4870]: I0216 17:06:28.830675 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cf6bm"] Feb 16 17:06:35 crc kubenswrapper[4870]: I0216 17:06:35.367453 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:06:35 crc kubenswrapper[4870]: I0216 17:06:35.368172 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:06:35 crc kubenswrapper[4870]: I0216 17:06:35.368276 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:06:35 crc kubenswrapper[4870]: I0216 17:06:35.369264 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"563d08ece6d8d03837c0e89113bb97e1e95888579fc4a7e6ea7811bf1591b1d0"} pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:06:35 crc kubenswrapper[4870]: I0216 17:06:35.369361 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" 
podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" containerID="cri-o://563d08ece6d8d03837c0e89113bb97e1e95888579fc4a7e6ea7811bf1591b1d0" gracePeriod=600 Feb 16 17:06:36 crc kubenswrapper[4870]: I0216 17:06:36.067086 4870 generic.go:334] "Generic (PLEG): container finished" podID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerID="563d08ece6d8d03837c0e89113bb97e1e95888579fc4a7e6ea7811bf1591b1d0" exitCode=0 Feb 16 17:06:36 crc kubenswrapper[4870]: I0216 17:06:36.067199 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerDied","Data":"563d08ece6d8d03837c0e89113bb97e1e95888579fc4a7e6ea7811bf1591b1d0"} Feb 16 17:06:36 crc kubenswrapper[4870]: I0216 17:06:36.067754 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerStarted","Data":"d7ca0ad42a015d4e082134ca747039ba8a51f27cc8bcf372698a3dfdcb0045da"} Feb 16 17:06:36 crc kubenswrapper[4870]: I0216 17:06:36.067809 4870 scope.go:117] "RemoveContainer" containerID="5a9cf7157bbb01397ac7936495386a27de8cfbb887938ec788adf37fc58b093b" Feb 16 17:06:38 crc kubenswrapper[4870]: I0216 17:06:38.923980 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wdq4b"] Feb 16 17:06:38 crc kubenswrapper[4870]: I0216 17:06:38.925386 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wdq4b" podUID="2b63cc22-5778-4805-b6fd-97f2ce43fda1" containerName="registry-server" containerID="cri-o://cd9e6630f40a66fe711a86a4a84de93ccefa6433cc5cae058a71370e4e6134de" gracePeriod=30 Feb 16 17:06:38 crc kubenswrapper[4870]: I0216 17:06:38.939163 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-w5rt4"] Feb 16 17:06:38 crc kubenswrapper[4870]: I0216 17:06:38.939494 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w5rt4" podUID="53e01b72-44e9-4f22-833e-9972542aca29" containerName="registry-server" containerID="cri-o://1752a86a8fe7087a863ea012c6297a318b5aaca5bd187be84f4152af9592b0df" gracePeriod=30 Feb 16 17:06:38 crc kubenswrapper[4870]: I0216 17:06:38.942443 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4jpbt"] Feb 16 17:06:38 crc kubenswrapper[4870]: I0216 17:06:38.942635 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" podUID="d9ed0cdf-88f2-42cd-93e9-22517410ca31" containerName="marketplace-operator" containerID="cri-o://b2d296d115598835e03da7902c27d6058d062291004bf60748bcda92f013ca74" gracePeriod=30 Feb 16 17:06:38 crc kubenswrapper[4870]: I0216 17:06:38.953944 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qrlg6"] Feb 16 17:06:38 crc kubenswrapper[4870]: I0216 17:06:38.954215 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qrlg6" podUID="c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" containerName="registry-server" containerID="cri-o://adecc2af7271c47ec968f6f90eca9456bc10b12a5ca533a5bdb20b48cf321769" gracePeriod=30 Feb 16 17:06:38 crc kubenswrapper[4870]: I0216 17:06:38.959519 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nr8mf"] Feb 16 17:06:38 crc kubenswrapper[4870]: I0216 17:06:38.959814 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nr8mf" podUID="588600bc-c342-4b4a-a755-0d8b541f0ca1" containerName="registry-server" 
containerID="cri-o://142dbe04c376542b582e1f6d02093f4e59951e77a27f7f1dbd4424dffc087c6b" gracePeriod=30 Feb 16 17:06:38 crc kubenswrapper[4870]: I0216 17:06:38.969230 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d4w5s"] Feb 16 17:06:38 crc kubenswrapper[4870]: I0216 17:06:38.969904 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d4w5s" Feb 16 17:06:38 crc kubenswrapper[4870]: I0216 17:06:38.992631 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d4w5s"] Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.073963 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bh6n\" (UniqueName: \"kubernetes.io/projected/2d50d687-7be2-4b64-9b82-fe66fd2d091a-kube-api-access-4bh6n\") pod \"marketplace-operator-79b997595-d4w5s\" (UID: \"2d50d687-7be2-4b64-9b82-fe66fd2d091a\") " pod="openshift-marketplace/marketplace-operator-79b997595-d4w5s" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.074026 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d50d687-7be2-4b64-9b82-fe66fd2d091a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d4w5s\" (UID: \"2d50d687-7be2-4b64-9b82-fe66fd2d091a\") " pod="openshift-marketplace/marketplace-operator-79b997595-d4w5s" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.074117 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d50d687-7be2-4b64-9b82-fe66fd2d091a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d4w5s\" (UID: \"2d50d687-7be2-4b64-9b82-fe66fd2d091a\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-d4w5s" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.097315 4870 generic.go:334] "Generic (PLEG): container finished" podID="d9ed0cdf-88f2-42cd-93e9-22517410ca31" containerID="b2d296d115598835e03da7902c27d6058d062291004bf60748bcda92f013ca74" exitCode=0 Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.097403 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" event={"ID":"d9ed0cdf-88f2-42cd-93e9-22517410ca31","Type":"ContainerDied","Data":"b2d296d115598835e03da7902c27d6058d062291004bf60748bcda92f013ca74"} Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.097439 4870 scope.go:117] "RemoveContainer" containerID="e960c3f1e997009c49bfe6aeeee46aa2872a57f240ed237f51979827aa6c0f1c" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.121112 4870 generic.go:334] "Generic (PLEG): container finished" podID="53e01b72-44e9-4f22-833e-9972542aca29" containerID="1752a86a8fe7087a863ea012c6297a318b5aaca5bd187be84f4152af9592b0df" exitCode=0 Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.121216 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5rt4" event={"ID":"53e01b72-44e9-4f22-833e-9972542aca29","Type":"ContainerDied","Data":"1752a86a8fe7087a863ea012c6297a318b5aaca5bd187be84f4152af9592b0df"} Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.138075 4870 generic.go:334] "Generic (PLEG): container finished" podID="2b63cc22-5778-4805-b6fd-97f2ce43fda1" containerID="cd9e6630f40a66fe711a86a4a84de93ccefa6433cc5cae058a71370e4e6134de" exitCode=0 Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.138160 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdq4b" event={"ID":"2b63cc22-5778-4805-b6fd-97f2ce43fda1","Type":"ContainerDied","Data":"cd9e6630f40a66fe711a86a4a84de93ccefa6433cc5cae058a71370e4e6134de"} Feb 16 
17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.144989 4870 generic.go:334] "Generic (PLEG): container finished" podID="c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" containerID="adecc2af7271c47ec968f6f90eca9456bc10b12a5ca533a5bdb20b48cf321769" exitCode=0 Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.145042 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrlg6" event={"ID":"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf","Type":"ContainerDied","Data":"adecc2af7271c47ec968f6f90eca9456bc10b12a5ca533a5bdb20b48cf321769"} Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.148753 4870 generic.go:334] "Generic (PLEG): container finished" podID="588600bc-c342-4b4a-a755-0d8b541f0ca1" containerID="142dbe04c376542b582e1f6d02093f4e59951e77a27f7f1dbd4424dffc087c6b" exitCode=0 Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.148799 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nr8mf" event={"ID":"588600bc-c342-4b4a-a755-0d8b541f0ca1","Type":"ContainerDied","Data":"142dbe04c376542b582e1f6d02093f4e59951e77a27f7f1dbd4424dffc087c6b"} Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.176589 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d50d687-7be2-4b64-9b82-fe66fd2d091a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d4w5s\" (UID: \"2d50d687-7be2-4b64-9b82-fe66fd2d091a\") " pod="openshift-marketplace/marketplace-operator-79b997595-d4w5s" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.176719 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bh6n\" (UniqueName: \"kubernetes.io/projected/2d50d687-7be2-4b64-9b82-fe66fd2d091a-kube-api-access-4bh6n\") pod \"marketplace-operator-79b997595-d4w5s\" (UID: \"2d50d687-7be2-4b64-9b82-fe66fd2d091a\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-d4w5s" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.176751 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d50d687-7be2-4b64-9b82-fe66fd2d091a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d4w5s\" (UID: \"2d50d687-7be2-4b64-9b82-fe66fd2d091a\") " pod="openshift-marketplace/marketplace-operator-79b997595-d4w5s" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.178379 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d50d687-7be2-4b64-9b82-fe66fd2d091a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d4w5s\" (UID: \"2d50d687-7be2-4b64-9b82-fe66fd2d091a\") " pod="openshift-marketplace/marketplace-operator-79b997595-d4w5s" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.189997 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d50d687-7be2-4b64-9b82-fe66fd2d091a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d4w5s\" (UID: \"2d50d687-7be2-4b64-9b82-fe66fd2d091a\") " pod="openshift-marketplace/marketplace-operator-79b997595-d4w5s" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.204796 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bh6n\" (UniqueName: \"kubernetes.io/projected/2d50d687-7be2-4b64-9b82-fe66fd2d091a-kube-api-access-4bh6n\") pod \"marketplace-operator-79b997595-d4w5s\" (UID: \"2d50d687-7be2-4b64-9b82-fe66fd2d091a\") " pod="openshift-marketplace/marketplace-operator-79b997595-d4w5s" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.287876 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d4w5s" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.497412 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wdq4b" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.577803 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.584310 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b63cc22-5778-4805-b6fd-97f2ce43fda1-catalog-content\") pod \"2b63cc22-5778-4805-b6fd-97f2ce43fda1\" (UID: \"2b63cc22-5778-4805-b6fd-97f2ce43fda1\") " Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.584388 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b63cc22-5778-4805-b6fd-97f2ce43fda1-utilities\") pod \"2b63cc22-5778-4805-b6fd-97f2ce43fda1\" (UID: \"2b63cc22-5778-4805-b6fd-97f2ce43fda1\") " Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.584414 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snpms\" (UniqueName: \"kubernetes.io/projected/2b63cc22-5778-4805-b6fd-97f2ce43fda1-kube-api-access-snpms\") pod \"2b63cc22-5778-4805-b6fd-97f2ce43fda1\" (UID: \"2b63cc22-5778-4805-b6fd-97f2ce43fda1\") " Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.585502 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b63cc22-5778-4805-b6fd-97f2ce43fda1-utilities" (OuterVolumeSpecName: "utilities") pod "2b63cc22-5778-4805-b6fd-97f2ce43fda1" (UID: "2b63cc22-5778-4805-b6fd-97f2ce43fda1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.590392 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b63cc22-5778-4805-b6fd-97f2ce43fda1-kube-api-access-snpms" (OuterVolumeSpecName: "kube-api-access-snpms") pod "2b63cc22-5778-4805-b6fd-97f2ce43fda1" (UID: "2b63cc22-5778-4805-b6fd-97f2ce43fda1"). InnerVolumeSpecName "kube-api-access-snpms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:06:39 crc kubenswrapper[4870]: E0216 17:06:39.606852 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of adecc2af7271c47ec968f6f90eca9456bc10b12a5ca533a5bdb20b48cf321769 is running failed: container process not found" containerID="adecc2af7271c47ec968f6f90eca9456bc10b12a5ca533a5bdb20b48cf321769" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 17:06:39 crc kubenswrapper[4870]: E0216 17:06:39.607360 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of adecc2af7271c47ec968f6f90eca9456bc10b12a5ca533a5bdb20b48cf321769 is running failed: container process not found" containerID="adecc2af7271c47ec968f6f90eca9456bc10b12a5ca533a5bdb20b48cf321769" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 17:06:39 crc kubenswrapper[4870]: E0216 17:06:39.608606 4870 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of adecc2af7271c47ec968f6f90eca9456bc10b12a5ca533a5bdb20b48cf321769 is running failed: container process not found" containerID="adecc2af7271c47ec968f6f90eca9456bc10b12a5ca533a5bdb20b48cf321769" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 17:06:39 crc kubenswrapper[4870]: E0216 17:06:39.608691 4870 prober.go:104] "Probe errored" err="rpc error: code = NotFound 
desc = container is not created or running: checking if PID of adecc2af7271c47ec968f6f90eca9456bc10b12a5ca533a5bdb20b48cf321769 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-qrlg6" podUID="c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" containerName="registry-server" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.625650 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nr8mf" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.633921 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qrlg6" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.638620 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w5rt4" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.646253 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b63cc22-5778-4805-b6fd-97f2ce43fda1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b63cc22-5778-4805-b6fd-97f2ce43fda1" (UID: "2b63cc22-5778-4805-b6fd-97f2ce43fda1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.689520 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9ed0cdf-88f2-42cd-93e9-22517410ca31-marketplace-trusted-ca\") pod \"d9ed0cdf-88f2-42cd-93e9-22517410ca31\" (UID: \"d9ed0cdf-88f2-42cd-93e9-22517410ca31\") " Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.689598 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d9ed0cdf-88f2-42cd-93e9-22517410ca31-marketplace-operator-metrics\") pod \"d9ed0cdf-88f2-42cd-93e9-22517410ca31\" (UID: \"d9ed0cdf-88f2-42cd-93e9-22517410ca31\") " Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.689711 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p48z4\" (UniqueName: \"kubernetes.io/projected/d9ed0cdf-88f2-42cd-93e9-22517410ca31-kube-api-access-p48z4\") pod \"d9ed0cdf-88f2-42cd-93e9-22517410ca31\" (UID: \"d9ed0cdf-88f2-42cd-93e9-22517410ca31\") " Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.689933 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b63cc22-5778-4805-b6fd-97f2ce43fda1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.689965 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b63cc22-5778-4805-b6fd-97f2ce43fda1-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.689978 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snpms\" (UniqueName: \"kubernetes.io/projected/2b63cc22-5778-4805-b6fd-97f2ce43fda1-kube-api-access-snpms\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:39 
crc kubenswrapper[4870]: I0216 17:06:39.690340 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9ed0cdf-88f2-42cd-93e9-22517410ca31-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d9ed0cdf-88f2-42cd-93e9-22517410ca31" (UID: "d9ed0cdf-88f2-42cd-93e9-22517410ca31"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.693089 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9ed0cdf-88f2-42cd-93e9-22517410ca31-kube-api-access-p48z4" (OuterVolumeSpecName: "kube-api-access-p48z4") pod "d9ed0cdf-88f2-42cd-93e9-22517410ca31" (UID: "d9ed0cdf-88f2-42cd-93e9-22517410ca31"). InnerVolumeSpecName "kube-api-access-p48z4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.693666 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9ed0cdf-88f2-42cd-93e9-22517410ca31-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d9ed0cdf-88f2-42cd-93e9-22517410ca31" (UID: "d9ed0cdf-88f2-42cd-93e9-22517410ca31"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.790729 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8czvc\" (UniqueName: \"kubernetes.io/projected/53e01b72-44e9-4f22-833e-9972542aca29-kube-api-access-8czvc\") pod \"53e01b72-44e9-4f22-833e-9972542aca29\" (UID: \"53e01b72-44e9-4f22-833e-9972542aca29\") " Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.790787 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skkfx\" (UniqueName: \"kubernetes.io/projected/588600bc-c342-4b4a-a755-0d8b541f0ca1-kube-api-access-skkfx\") pod \"588600bc-c342-4b4a-a755-0d8b541f0ca1\" (UID: \"588600bc-c342-4b4a-a755-0d8b541f0ca1\") " Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.790837 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-catalog-content\") pod \"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf\" (UID: \"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf\") " Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.790891 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e01b72-44e9-4f22-833e-9972542aca29-utilities\") pod \"53e01b72-44e9-4f22-833e-9972542aca29\" (UID: \"53e01b72-44e9-4f22-833e-9972542aca29\") " Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.790908 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/588600bc-c342-4b4a-a755-0d8b541f0ca1-catalog-content\") pod \"588600bc-c342-4b4a-a755-0d8b541f0ca1\" (UID: \"588600bc-c342-4b4a-a755-0d8b541f0ca1\") " Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.790938 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-utilities\") pod \"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf\" (UID: \"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf\") " Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.790970 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/588600bc-c342-4b4a-a755-0d8b541f0ca1-utilities\") pod \"588600bc-c342-4b4a-a755-0d8b541f0ca1\" (UID: \"588600bc-c342-4b4a-a755-0d8b541f0ca1\") " Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.790987 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wv4c\" (UniqueName: \"kubernetes.io/projected/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-kube-api-access-7wv4c\") pod \"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf\" (UID: \"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf\") " Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.791018 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e01b72-44e9-4f22-833e-9972542aca29-catalog-content\") pod \"53e01b72-44e9-4f22-833e-9972542aca29\" (UID: \"53e01b72-44e9-4f22-833e-9972542aca29\") " Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.791222 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p48z4\" (UniqueName: \"kubernetes.io/projected/d9ed0cdf-88f2-42cd-93e9-22517410ca31-kube-api-access-p48z4\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.791236 4870 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9ed0cdf-88f2-42cd-93e9-22517410ca31-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.791245 4870 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/d9ed0cdf-88f2-42cd-93e9-22517410ca31-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.791906 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53e01b72-44e9-4f22-833e-9972542aca29-utilities" (OuterVolumeSpecName: "utilities") pod "53e01b72-44e9-4f22-833e-9972542aca29" (UID: "53e01b72-44e9-4f22-833e-9972542aca29"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.791906 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-utilities" (OuterVolumeSpecName: "utilities") pod "c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" (UID: "c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.792692 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/588600bc-c342-4b4a-a755-0d8b541f0ca1-utilities" (OuterVolumeSpecName: "utilities") pod "588600bc-c342-4b4a-a755-0d8b541f0ca1" (UID: "588600bc-c342-4b4a-a755-0d8b541f0ca1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.793542 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/588600bc-c342-4b4a-a755-0d8b541f0ca1-kube-api-access-skkfx" (OuterVolumeSpecName: "kube-api-access-skkfx") pod "588600bc-c342-4b4a-a755-0d8b541f0ca1" (UID: "588600bc-c342-4b4a-a755-0d8b541f0ca1"). InnerVolumeSpecName "kube-api-access-skkfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.796266 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-kube-api-access-7wv4c" (OuterVolumeSpecName: "kube-api-access-7wv4c") pod "c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" (UID: "c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf"). InnerVolumeSpecName "kube-api-access-7wv4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.798496 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e01b72-44e9-4f22-833e-9972542aca29-kube-api-access-8czvc" (OuterVolumeSpecName: "kube-api-access-8czvc") pod "53e01b72-44e9-4f22-833e-9972542aca29" (UID: "53e01b72-44e9-4f22-833e-9972542aca29"). InnerVolumeSpecName "kube-api-access-8czvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.812928 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" (UID: "c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.848049 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53e01b72-44e9-4f22-833e-9972542aca29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53e01b72-44e9-4f22-833e-9972542aca29" (UID: "53e01b72-44e9-4f22-833e-9972542aca29"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.868175 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d4w5s"] Feb 16 17:06:39 crc kubenswrapper[4870]: W0216 17:06:39.879360 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d50d687_7be2_4b64_9b82_fe66fd2d091a.slice/crio-d92c05877d6142e51155ae5216413017f3462581f462a1c55c3e937dbdaf0675 WatchSource:0}: Error finding container d92c05877d6142e51155ae5216413017f3462581f462a1c55c3e937dbdaf0675: Status 404 returned error can't find the container with id d92c05877d6142e51155ae5216413017f3462581f462a1c55c3e937dbdaf0675 Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.892279 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skkfx\" (UniqueName: \"kubernetes.io/projected/588600bc-c342-4b4a-a755-0d8b541f0ca1-kube-api-access-skkfx\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.892304 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.892315 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e01b72-44e9-4f22-833e-9972542aca29-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.892326 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.892335 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/588600bc-c342-4b4a-a755-0d8b541f0ca1-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.892343 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wv4c\" (UniqueName: \"kubernetes.io/projected/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf-kube-api-access-7wv4c\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.892350 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e01b72-44e9-4f22-833e-9972542aca29-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.892359 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8czvc\" (UniqueName: \"kubernetes.io/projected/53e01b72-44e9-4f22-833e-9972542aca29-kube-api-access-8czvc\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.931203 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/588600bc-c342-4b4a-a755-0d8b541f0ca1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "588600bc-c342-4b4a-a755-0d8b541f0ca1" (UID: "588600bc-c342-4b4a-a755-0d8b541f0ca1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:06:39 crc kubenswrapper[4870]: I0216 17:06:39.993001 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/588600bc-c342-4b4a-a755-0d8b541f0ca1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.156530 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nr8mf" event={"ID":"588600bc-c342-4b4a-a755-0d8b541f0ca1","Type":"ContainerDied","Data":"556c2dfe2653a345edcbf966eb559526e0d7740ea4b5cf080e41724544f1bc4b"} Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.156553 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nr8mf" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.156886 4870 scope.go:117] "RemoveContainer" containerID="142dbe04c376542b582e1f6d02093f4e59951e77a27f7f1dbd4424dffc087c6b" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.158226 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d4w5s" event={"ID":"2d50d687-7be2-4b64-9b82-fe66fd2d091a","Type":"ContainerStarted","Data":"66bf968f6ca004b6bf307ca21ae27bf00acc2fe1617de2dc5840f09015be0a25"} Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.158260 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d4w5s" event={"ID":"2d50d687-7be2-4b64-9b82-fe66fd2d091a","Type":"ContainerStarted","Data":"d92c05877d6142e51155ae5216413017f3462581f462a1c55c3e937dbdaf0675"} Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.158438 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-d4w5s" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.159647 4870 patch_prober.go:28] interesting 
pod/marketplace-operator-79b997595-d4w5s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" start-of-body= Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.159688 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-d4w5s" podUID="2d50d687-7be2-4b64-9b82-fe66fd2d091a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.159828 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" event={"ID":"d9ed0cdf-88f2-42cd-93e9-22517410ca31","Type":"ContainerDied","Data":"10eefd75e0f4df7f25b977f8aeaa6df9227ffb8f80a2c0c5cf8da7ce264db4df"} Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.159843 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4jpbt" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.161937 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5rt4" event={"ID":"53e01b72-44e9-4f22-833e-9972542aca29","Type":"ContainerDied","Data":"cadf72f99151c2fdfff4d03fc6c556a3bdaeb8714a2a9d7c7dbca6efbf9a6312"} Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.161974 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w5rt4" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.164015 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wdq4b" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.164038 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdq4b" event={"ID":"2b63cc22-5778-4805-b6fd-97f2ce43fda1","Type":"ContainerDied","Data":"714343a040c580bde319d4e559d491e8e0261bdeef3089d5989689863ff8f4fc"} Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.170887 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrlg6" event={"ID":"c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf","Type":"ContainerDied","Data":"6d225f44bc887afe6f09fb5c08e37f1890846a608c16fa3696839bf688fa5c12"} Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.170984 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qrlg6" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.177926 4870 scope.go:117] "RemoveContainer" containerID="99ddf220e46558cf1cfb2a0e708d7546441a2b8ce1bdb373f86fe003aaabaae3" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.178774 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-d4w5s" podStartSLOduration=2.178756702 podStartE2EDuration="2.178756702s" podCreationTimestamp="2026-02-16 17:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:06:40.178220866 +0000 UTC m=+404.661685250" watchObservedRunningTime="2026-02-16 17:06:40.178756702 +0000 UTC m=+404.662221076" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.214739 4870 scope.go:117] "RemoveContainer" containerID="92665c2a66d1e4f722f998067f847933d66b236f415d91e02390047c4846301d" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.216935 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-nr8mf"] Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.234007 4870 scope.go:117] "RemoveContainer" containerID="b2d296d115598835e03da7902c27d6058d062291004bf60748bcda92f013ca74" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.235513 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nr8mf"] Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.244023 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w5rt4"] Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.252907 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w5rt4"] Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.256185 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4jpbt"] Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.259862 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4jpbt"] Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.263231 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qrlg6"] Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.265207 4870 scope.go:117] "RemoveContainer" containerID="1752a86a8fe7087a863ea012c6297a318b5aaca5bd187be84f4152af9592b0df" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.266455 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qrlg6"] Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.268657 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wdq4b"] Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.271883 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wdq4b"] Feb 16 
17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.278534 4870 scope.go:117] "RemoveContainer" containerID="9db9050a91d47e0228a28605d0435c316dd012c2b734a88f0e2f219b4a3f4b3a" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.292673 4870 scope.go:117] "RemoveContainer" containerID="57ecd99fb4d92f4aed47ae16670d3e913e224c57a5c26d4ee53fe07fc42103bc" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.311796 4870 scope.go:117] "RemoveContainer" containerID="cd9e6630f40a66fe711a86a4a84de93ccefa6433cc5cae058a71370e4e6134de" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.325167 4870 scope.go:117] "RemoveContainer" containerID="ada51e5d9e585ef849093746d09212c8f23dc4bc5ed458ec072a5318a412829f" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.338668 4870 scope.go:117] "RemoveContainer" containerID="36aa45dad41518901c3fbddcc07773bd29aea24edb016cb1be344ac4a43aea7b" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.352926 4870 scope.go:117] "RemoveContainer" containerID="adecc2af7271c47ec968f6f90eca9456bc10b12a5ca533a5bdb20b48cf321769" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.365281 4870 scope.go:117] "RemoveContainer" containerID="49404836917e7e7eb2291b66763d2b5422d29a2a9140cc2a131f5b1aeac33fec" Feb 16 17:06:40 crc kubenswrapper[4870]: I0216 17:06:40.380346 4870 scope.go:117] "RemoveContainer" containerID="f797f268ebd8d7c60e724942dfa666d810c4f1fa4e12cd69aee56ae191ad4ea2" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.138494 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-44v98"] Feb 16 17:06:41 crc kubenswrapper[4870]: E0216 17:06:41.143068 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588600bc-c342-4b4a-a755-0d8b541f0ca1" containerName="extract-content" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143122 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="588600bc-c342-4b4a-a755-0d8b541f0ca1" containerName="extract-content" Feb 16 
17:06:41 crc kubenswrapper[4870]: E0216 17:06:41.143148 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e01b72-44e9-4f22-833e-9972542aca29" containerName="registry-server" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143162 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e01b72-44e9-4f22-833e-9972542aca29" containerName="registry-server" Feb 16 17:06:41 crc kubenswrapper[4870]: E0216 17:06:41.143180 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" containerName="registry-server" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143195 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" containerName="registry-server" Feb 16 17:06:41 crc kubenswrapper[4870]: E0216 17:06:41.143215 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b63cc22-5778-4805-b6fd-97f2ce43fda1" containerName="registry-server" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143227 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b63cc22-5778-4805-b6fd-97f2ce43fda1" containerName="registry-server" Feb 16 17:06:41 crc kubenswrapper[4870]: E0216 17:06:41.143272 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9ed0cdf-88f2-42cd-93e9-22517410ca31" containerName="marketplace-operator" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143288 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9ed0cdf-88f2-42cd-93e9-22517410ca31" containerName="marketplace-operator" Feb 16 17:06:41 crc kubenswrapper[4870]: E0216 17:06:41.143306 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588600bc-c342-4b4a-a755-0d8b541f0ca1" containerName="registry-server" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143320 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="588600bc-c342-4b4a-a755-0d8b541f0ca1" containerName="registry-server" Feb 16 
17:06:41 crc kubenswrapper[4870]: E0216 17:06:41.143341 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e01b72-44e9-4f22-833e-9972542aca29" containerName="extract-utilities" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143357 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e01b72-44e9-4f22-833e-9972542aca29" containerName="extract-utilities" Feb 16 17:06:41 crc kubenswrapper[4870]: E0216 17:06:41.143388 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588600bc-c342-4b4a-a755-0d8b541f0ca1" containerName="extract-utilities" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143404 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="588600bc-c342-4b4a-a755-0d8b541f0ca1" containerName="extract-utilities" Feb 16 17:06:41 crc kubenswrapper[4870]: E0216 17:06:41.143423 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b63cc22-5778-4805-b6fd-97f2ce43fda1" containerName="extract-content" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143438 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b63cc22-5778-4805-b6fd-97f2ce43fda1" containerName="extract-content" Feb 16 17:06:41 crc kubenswrapper[4870]: E0216 17:06:41.143460 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e01b72-44e9-4f22-833e-9972542aca29" containerName="extract-content" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143474 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e01b72-44e9-4f22-833e-9972542aca29" containerName="extract-content" Feb 16 17:06:41 crc kubenswrapper[4870]: E0216 17:06:41.143498 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" containerName="extract-utilities" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143515 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" containerName="extract-utilities" Feb 16 
17:06:41 crc kubenswrapper[4870]: E0216 17:06:41.143535 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b63cc22-5778-4805-b6fd-97f2ce43fda1" containerName="extract-utilities" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143547 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b63cc22-5778-4805-b6fd-97f2ce43fda1" containerName="extract-utilities" Feb 16 17:06:41 crc kubenswrapper[4870]: E0216 17:06:41.143566 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" containerName="extract-content" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143578 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" containerName="extract-content" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143741 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b63cc22-5778-4805-b6fd-97f2ce43fda1" containerName="registry-server" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143759 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="588600bc-c342-4b4a-a755-0d8b541f0ca1" containerName="registry-server" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143775 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9ed0cdf-88f2-42cd-93e9-22517410ca31" containerName="marketplace-operator" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143792 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9ed0cdf-88f2-42cd-93e9-22517410ca31" containerName="marketplace-operator" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143815 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e01b72-44e9-4f22-833e-9972542aca29" containerName="registry-server" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.143834 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" 
containerName="registry-server" Feb 16 17:06:41 crc kubenswrapper[4870]: E0216 17:06:41.144015 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9ed0cdf-88f2-42cd-93e9-22517410ca31" containerName="marketplace-operator" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.144030 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9ed0cdf-88f2-42cd-93e9-22517410ca31" containerName="marketplace-operator" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.145823 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44v98"] Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.146007 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44v98" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.148531 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.194988 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-d4w5s" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.330553 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0-catalog-content\") pod \"redhat-marketplace-44v98\" (UID: \"0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0\") " pod="openshift-marketplace/redhat-marketplace-44v98" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.330649 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0-utilities\") pod \"redhat-marketplace-44v98\" (UID: \"0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0\") " 
pod="openshift-marketplace/redhat-marketplace-44v98" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.330700 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zh25\" (UniqueName: \"kubernetes.io/projected/0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0-kube-api-access-7zh25\") pod \"redhat-marketplace-44v98\" (UID: \"0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0\") " pod="openshift-marketplace/redhat-marketplace-44v98" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.334672 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dwn58"] Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.335794 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dwn58" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.338032 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.350885 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dwn58"] Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.432265 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0-utilities\") pod \"redhat-marketplace-44v98\" (UID: \"0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0\") " pod="openshift-marketplace/redhat-marketplace-44v98" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.432323 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zh25\" (UniqueName: \"kubernetes.io/projected/0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0-kube-api-access-7zh25\") pod \"redhat-marketplace-44v98\" (UID: \"0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0\") " pod="openshift-marketplace/redhat-marketplace-44v98" Feb 16 
17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.432374 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0-catalog-content\") pod \"redhat-marketplace-44v98\" (UID: \"0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0\") " pod="openshift-marketplace/redhat-marketplace-44v98" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.432825 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0-utilities\") pod \"redhat-marketplace-44v98\" (UID: \"0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0\") " pod="openshift-marketplace/redhat-marketplace-44v98" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.432830 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0-catalog-content\") pod \"redhat-marketplace-44v98\" (UID: \"0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0\") " pod="openshift-marketplace/redhat-marketplace-44v98" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.451400 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zh25\" (UniqueName: \"kubernetes.io/projected/0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0-kube-api-access-7zh25\") pod \"redhat-marketplace-44v98\" (UID: \"0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0\") " pod="openshift-marketplace/redhat-marketplace-44v98" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.463073 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44v98" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.536301 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d25274f-1b87-4f2b-90aa-71dc0c0b3184-catalog-content\") pod \"redhat-operators-dwn58\" (UID: \"0d25274f-1b87-4f2b-90aa-71dc0c0b3184\") " pod="openshift-marketplace/redhat-operators-dwn58" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.536355 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d25274f-1b87-4f2b-90aa-71dc0c0b3184-utilities\") pod \"redhat-operators-dwn58\" (UID: \"0d25274f-1b87-4f2b-90aa-71dc0c0b3184\") " pod="openshift-marketplace/redhat-operators-dwn58" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.536418 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpfxl\" (UniqueName: \"kubernetes.io/projected/0d25274f-1b87-4f2b-90aa-71dc0c0b3184-kube-api-access-zpfxl\") pod \"redhat-operators-dwn58\" (UID: \"0d25274f-1b87-4f2b-90aa-71dc0c0b3184\") " pod="openshift-marketplace/redhat-operators-dwn58" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.637240 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpfxl\" (UniqueName: \"kubernetes.io/projected/0d25274f-1b87-4f2b-90aa-71dc0c0b3184-kube-api-access-zpfxl\") pod \"redhat-operators-dwn58\" (UID: \"0d25274f-1b87-4f2b-90aa-71dc0c0b3184\") " pod="openshift-marketplace/redhat-operators-dwn58" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.637320 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d25274f-1b87-4f2b-90aa-71dc0c0b3184-catalog-content\") pod 
\"redhat-operators-dwn58\" (UID: \"0d25274f-1b87-4f2b-90aa-71dc0c0b3184\") " pod="openshift-marketplace/redhat-operators-dwn58" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.637342 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d25274f-1b87-4f2b-90aa-71dc0c0b3184-utilities\") pod \"redhat-operators-dwn58\" (UID: \"0d25274f-1b87-4f2b-90aa-71dc0c0b3184\") " pod="openshift-marketplace/redhat-operators-dwn58" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.637984 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d25274f-1b87-4f2b-90aa-71dc0c0b3184-utilities\") pod \"redhat-operators-dwn58\" (UID: \"0d25274f-1b87-4f2b-90aa-71dc0c0b3184\") " pod="openshift-marketplace/redhat-operators-dwn58" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.638031 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d25274f-1b87-4f2b-90aa-71dc0c0b3184-catalog-content\") pod \"redhat-operators-dwn58\" (UID: \"0d25274f-1b87-4f2b-90aa-71dc0c0b3184\") " pod="openshift-marketplace/redhat-operators-dwn58" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.657532 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpfxl\" (UniqueName: \"kubernetes.io/projected/0d25274f-1b87-4f2b-90aa-71dc0c0b3184-kube-api-access-zpfxl\") pod \"redhat-operators-dwn58\" (UID: \"0d25274f-1b87-4f2b-90aa-71dc0c0b3184\") " pod="openshift-marketplace/redhat-operators-dwn58" Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.845869 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44v98"] Feb 16 17:06:41 crc kubenswrapper[4870]: W0216 17:06:41.850649 4870 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0da6a75d_9b18_4b2c_8e2f_356d5a7cd1e0.slice/crio-3544415d2f6feb129ab3dde64a2105d92e1260c07ad4187effac3a723043ac97 WatchSource:0}: Error finding container 3544415d2f6feb129ab3dde64a2105d92e1260c07ad4187effac3a723043ac97: Status 404 returned error can't find the container with id 3544415d2f6feb129ab3dde64a2105d92e1260c07ad4187effac3a723043ac97 Feb 16 17:06:41 crc kubenswrapper[4870]: I0216 17:06:41.955311 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dwn58" Feb 16 17:06:42 crc kubenswrapper[4870]: I0216 17:06:42.196883 4870 generic.go:334] "Generic (PLEG): container finished" podID="0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0" containerID="f12fdff762eaa2311074ed9b01e16c973a04c86b0b2241b448bd4268c5dac256" exitCode=0 Feb 16 17:06:42 crc kubenswrapper[4870]: I0216 17:06:42.196973 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44v98" event={"ID":"0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0","Type":"ContainerDied","Data":"f12fdff762eaa2311074ed9b01e16c973a04c86b0b2241b448bd4268c5dac256"} Feb 16 17:06:42 crc kubenswrapper[4870]: I0216 17:06:42.197017 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44v98" event={"ID":"0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0","Type":"ContainerStarted","Data":"3544415d2f6feb129ab3dde64a2105d92e1260c07ad4187effac3a723043ac97"} Feb 16 17:06:42 crc kubenswrapper[4870]: I0216 17:06:42.229786 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b63cc22-5778-4805-b6fd-97f2ce43fda1" path="/var/lib/kubelet/pods/2b63cc22-5778-4805-b6fd-97f2ce43fda1/volumes" Feb 16 17:06:42 crc kubenswrapper[4870]: I0216 17:06:42.230705 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53e01b72-44e9-4f22-833e-9972542aca29" path="/var/lib/kubelet/pods/53e01b72-44e9-4f22-833e-9972542aca29/volumes" 
Feb 16 17:06:42 crc kubenswrapper[4870]: I0216 17:06:42.231286 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="588600bc-c342-4b4a-a755-0d8b541f0ca1" path="/var/lib/kubelet/pods/588600bc-c342-4b4a-a755-0d8b541f0ca1/volumes" Feb 16 17:06:42 crc kubenswrapper[4870]: I0216 17:06:42.232289 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf" path="/var/lib/kubelet/pods/c8a6b6ab-b5ec-4627-9a9f-8a5623d442cf/volumes" Feb 16 17:06:42 crc kubenswrapper[4870]: I0216 17:06:42.233006 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9ed0cdf-88f2-42cd-93e9-22517410ca31" path="/var/lib/kubelet/pods/d9ed0cdf-88f2-42cd-93e9-22517410ca31/volumes" Feb 16 17:06:42 crc kubenswrapper[4870]: I0216 17:06:42.387289 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dwn58"] Feb 16 17:06:42 crc kubenswrapper[4870]: W0216 17:06:42.391961 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d25274f_1b87_4f2b_90aa_71dc0c0b3184.slice/crio-4baaee36a496956853ef0ffa0236e7c2e7c601201c618d2835fd5ccd7e532d8e WatchSource:0}: Error finding container 4baaee36a496956853ef0ffa0236e7c2e7c601201c618d2835fd5ccd7e532d8e: Status 404 returned error can't find the container with id 4baaee36a496956853ef0ffa0236e7c2e7c601201c618d2835fd5ccd7e532d8e Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.204676 4870 generic.go:334] "Generic (PLEG): container finished" podID="0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0" containerID="3949f91d9ea5022c1406f316ca90d10ec9126d88cdeb8b6d1e1ec35de24deaf6" exitCode=0 Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.204723 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44v98" 
event={"ID":"0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0","Type":"ContainerDied","Data":"3949f91d9ea5022c1406f316ca90d10ec9126d88cdeb8b6d1e1ec35de24deaf6"} Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.206589 4870 generic.go:334] "Generic (PLEG): container finished" podID="0d25274f-1b87-4f2b-90aa-71dc0c0b3184" containerID="c20c88f3a77d950f7c0cfe0a7dca7a147557737dc8c0e477a839b67ecc9d475c" exitCode=0 Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.206624 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwn58" event={"ID":"0d25274f-1b87-4f2b-90aa-71dc0c0b3184","Type":"ContainerDied","Data":"c20c88f3a77d950f7c0cfe0a7dca7a147557737dc8c0e477a839b67ecc9d475c"} Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.206652 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwn58" event={"ID":"0d25274f-1b87-4f2b-90aa-71dc0c0b3184","Type":"ContainerStarted","Data":"4baaee36a496956853ef0ffa0236e7c2e7c601201c618d2835fd5ccd7e532d8e"} Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.534996 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s487d"] Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.535896 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s487d" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.544509 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s487d"] Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.545327 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.662119 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db6fcf97-0653-4411-b5ae-a3af8532801d-catalog-content\") pod \"certified-operators-s487d\" (UID: \"db6fcf97-0653-4411-b5ae-a3af8532801d\") " pod="openshift-marketplace/certified-operators-s487d" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.662192 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db6fcf97-0653-4411-b5ae-a3af8532801d-utilities\") pod \"certified-operators-s487d\" (UID: \"db6fcf97-0653-4411-b5ae-a3af8532801d\") " pod="openshift-marketplace/certified-operators-s487d" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.662278 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7rsz\" (UniqueName: \"kubernetes.io/projected/db6fcf97-0653-4411-b5ae-a3af8532801d-kube-api-access-r7rsz\") pod \"certified-operators-s487d\" (UID: \"db6fcf97-0653-4411-b5ae-a3af8532801d\") " pod="openshift-marketplace/certified-operators-s487d" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.737790 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sz5lq"] Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.739338 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sz5lq" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.742017 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.746930 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sz5lq"] Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.763571 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7rsz\" (UniqueName: \"kubernetes.io/projected/db6fcf97-0653-4411-b5ae-a3af8532801d-kube-api-access-r7rsz\") pod \"certified-operators-s487d\" (UID: \"db6fcf97-0653-4411-b5ae-a3af8532801d\") " pod="openshift-marketplace/certified-operators-s487d" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.763644 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db6fcf97-0653-4411-b5ae-a3af8532801d-catalog-content\") pod \"certified-operators-s487d\" (UID: \"db6fcf97-0653-4411-b5ae-a3af8532801d\") " pod="openshift-marketplace/certified-operators-s487d" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.763683 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db6fcf97-0653-4411-b5ae-a3af8532801d-utilities\") pod \"certified-operators-s487d\" (UID: \"db6fcf97-0653-4411-b5ae-a3af8532801d\") " pod="openshift-marketplace/certified-operators-s487d" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.764138 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db6fcf97-0653-4411-b5ae-a3af8532801d-utilities\") pod \"certified-operators-s487d\" (UID: \"db6fcf97-0653-4411-b5ae-a3af8532801d\") " 
pod="openshift-marketplace/certified-operators-s487d" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.764575 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db6fcf97-0653-4411-b5ae-a3af8532801d-catalog-content\") pod \"certified-operators-s487d\" (UID: \"db6fcf97-0653-4411-b5ae-a3af8532801d\") " pod="openshift-marketplace/certified-operators-s487d" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.785140 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7rsz\" (UniqueName: \"kubernetes.io/projected/db6fcf97-0653-4411-b5ae-a3af8532801d-kube-api-access-r7rsz\") pod \"certified-operators-s487d\" (UID: \"db6fcf97-0653-4411-b5ae-a3af8532801d\") " pod="openshift-marketplace/certified-operators-s487d" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.863915 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s487d" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.864427 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc80d675-c023-458d-8287-3e56add1b1d2-utilities\") pod \"community-operators-sz5lq\" (UID: \"bc80d675-c023-458d-8287-3e56add1b1d2\") " pod="openshift-marketplace/community-operators-sz5lq" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.864471 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc80d675-c023-458d-8287-3e56add1b1d2-catalog-content\") pod \"community-operators-sz5lq\" (UID: \"bc80d675-c023-458d-8287-3e56add1b1d2\") " pod="openshift-marketplace/community-operators-sz5lq" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.864506 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6nh6\" (UniqueName: \"kubernetes.io/projected/bc80d675-c023-458d-8287-3e56add1b1d2-kube-api-access-r6nh6\") pod \"community-operators-sz5lq\" (UID: \"bc80d675-c023-458d-8287-3e56add1b1d2\") " pod="openshift-marketplace/community-operators-sz5lq" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.966594 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc80d675-c023-458d-8287-3e56add1b1d2-utilities\") pod \"community-operators-sz5lq\" (UID: \"bc80d675-c023-458d-8287-3e56add1b1d2\") " pod="openshift-marketplace/community-operators-sz5lq" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.966928 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc80d675-c023-458d-8287-3e56add1b1d2-catalog-content\") pod \"community-operators-sz5lq\" (UID: \"bc80d675-c023-458d-8287-3e56add1b1d2\") " pod="openshift-marketplace/community-operators-sz5lq" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.966989 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6nh6\" (UniqueName: \"kubernetes.io/projected/bc80d675-c023-458d-8287-3e56add1b1d2-kube-api-access-r6nh6\") pod \"community-operators-sz5lq\" (UID: \"bc80d675-c023-458d-8287-3e56add1b1d2\") " pod="openshift-marketplace/community-operators-sz5lq" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.967802 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc80d675-c023-458d-8287-3e56add1b1d2-catalog-content\") pod \"community-operators-sz5lq\" (UID: \"bc80d675-c023-458d-8287-3e56add1b1d2\") " pod="openshift-marketplace/community-operators-sz5lq" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.969286 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc80d675-c023-458d-8287-3e56add1b1d2-utilities\") pod \"community-operators-sz5lq\" (UID: \"bc80d675-c023-458d-8287-3e56add1b1d2\") " pod="openshift-marketplace/community-operators-sz5lq" Feb 16 17:06:43 crc kubenswrapper[4870]: I0216 17:06:43.984858 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6nh6\" (UniqueName: \"kubernetes.io/projected/bc80d675-c023-458d-8287-3e56add1b1d2-kube-api-access-r6nh6\") pod \"community-operators-sz5lq\" (UID: \"bc80d675-c023-458d-8287-3e56add1b1d2\") " pod="openshift-marketplace/community-operators-sz5lq" Feb 16 17:06:44 crc kubenswrapper[4870]: I0216 17:06:44.084295 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sz5lq" Feb 16 17:06:44 crc kubenswrapper[4870]: I0216 17:06:44.214931 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwn58" event={"ID":"0d25274f-1b87-4f2b-90aa-71dc0c0b3184","Type":"ContainerStarted","Data":"ae3ee005883381471e720e4d601b028cffe4f1eba1cf8131f79f5f3ba642b239"} Feb 16 17:06:44 crc kubenswrapper[4870]: I0216 17:06:44.230064 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44v98" event={"ID":"0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0","Type":"ContainerStarted","Data":"c4ec3c62ceccb3ece2a5bec416e21f4898c312280bb47320933d5fb3b0966155"} Feb 16 17:06:44 crc kubenswrapper[4870]: I0216 17:06:44.257993 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-44v98" podStartSLOduration=1.768835932 podStartE2EDuration="3.257977828s" podCreationTimestamp="2026-02-16 17:06:41 +0000 UTC" firstStartedPulling="2026-02-16 17:06:42.198822122 +0000 UTC m=+406.682286506" lastFinishedPulling="2026-02-16 17:06:43.687964018 +0000 UTC 
m=+408.171428402" observedRunningTime="2026-02-16 17:06:44.256804534 +0000 UTC m=+408.740268918" watchObservedRunningTime="2026-02-16 17:06:44.257977828 +0000 UTC m=+408.741442212" Feb 16 17:06:44 crc kubenswrapper[4870]: I0216 17:06:44.322690 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s487d"] Feb 16 17:06:44 crc kubenswrapper[4870]: W0216 17:06:44.332515 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb6fcf97_0653_4411_b5ae_a3af8532801d.slice/crio-787bb4458d0bfbe4aadec55476fab9ba6a9fb65ba8011f09e452a73ffacade2e WatchSource:0}: Error finding container 787bb4458d0bfbe4aadec55476fab9ba6a9fb65ba8011f09e452a73ffacade2e: Status 404 returned error can't find the container with id 787bb4458d0bfbe4aadec55476fab9ba6a9fb65ba8011f09e452a73ffacade2e Feb 16 17:06:44 crc kubenswrapper[4870]: I0216 17:06:44.359233 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sz5lq"] Feb 16 17:06:44 crc kubenswrapper[4870]: W0216 17:06:44.380527 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc80d675_c023_458d_8287_3e56add1b1d2.slice/crio-a850723b82dda899448d56fcd0fc8267d06f567a5f855211354689f355481502 WatchSource:0}: Error finding container a850723b82dda899448d56fcd0fc8267d06f567a5f855211354689f355481502: Status 404 returned error can't find the container with id a850723b82dda899448d56fcd0fc8267d06f567a5f855211354689f355481502 Feb 16 17:06:45 crc kubenswrapper[4870]: I0216 17:06:45.230629 4870 generic.go:334] "Generic (PLEG): container finished" podID="0d25274f-1b87-4f2b-90aa-71dc0c0b3184" containerID="ae3ee005883381471e720e4d601b028cffe4f1eba1cf8131f79f5f3ba642b239" exitCode=0 Feb 16 17:06:45 crc kubenswrapper[4870]: I0216 17:06:45.230748 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-dwn58" event={"ID":"0d25274f-1b87-4f2b-90aa-71dc0c0b3184","Type":"ContainerDied","Data":"ae3ee005883381471e720e4d601b028cffe4f1eba1cf8131f79f5f3ba642b239"} Feb 16 17:06:45 crc kubenswrapper[4870]: I0216 17:06:45.234600 4870 generic.go:334] "Generic (PLEG): container finished" podID="bc80d675-c023-458d-8287-3e56add1b1d2" containerID="c78b4f9462b30c5089c6bc5ee3e051005b6115b0f6c51bf54652d83a2fb77381" exitCode=0 Feb 16 17:06:45 crc kubenswrapper[4870]: I0216 17:06:45.234649 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sz5lq" event={"ID":"bc80d675-c023-458d-8287-3e56add1b1d2","Type":"ContainerDied","Data":"c78b4f9462b30c5089c6bc5ee3e051005b6115b0f6c51bf54652d83a2fb77381"} Feb 16 17:06:45 crc kubenswrapper[4870]: I0216 17:06:45.234673 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sz5lq" event={"ID":"bc80d675-c023-458d-8287-3e56add1b1d2","Type":"ContainerStarted","Data":"a850723b82dda899448d56fcd0fc8267d06f567a5f855211354689f355481502"} Feb 16 17:06:45 crc kubenswrapper[4870]: I0216 17:06:45.237741 4870 generic.go:334] "Generic (PLEG): container finished" podID="db6fcf97-0653-4411-b5ae-a3af8532801d" containerID="3e153e279f75e6afc18c29458a8afe3d86467394cfeaf46942bd75fd35608ad2" exitCode=0 Feb 16 17:06:45 crc kubenswrapper[4870]: I0216 17:06:45.237838 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s487d" event={"ID":"db6fcf97-0653-4411-b5ae-a3af8532801d","Type":"ContainerDied","Data":"3e153e279f75e6afc18c29458a8afe3d86467394cfeaf46942bd75fd35608ad2"} Feb 16 17:06:45 crc kubenswrapper[4870]: I0216 17:06:45.237887 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s487d" 
event={"ID":"db6fcf97-0653-4411-b5ae-a3af8532801d","Type":"ContainerStarted","Data":"787bb4458d0bfbe4aadec55476fab9ba6a9fb65ba8011f09e452a73ffacade2e"} Feb 16 17:06:46 crc kubenswrapper[4870]: I0216 17:06:46.244522 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sz5lq" event={"ID":"bc80d675-c023-458d-8287-3e56add1b1d2","Type":"ContainerStarted","Data":"37d292b31797098864129cac90954277f2931d8b1de76318efbdd3987c45c0cf"} Feb 16 17:06:46 crc kubenswrapper[4870]: I0216 17:06:46.251781 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s487d" event={"ID":"db6fcf97-0653-4411-b5ae-a3af8532801d","Type":"ContainerStarted","Data":"1e275c0f2addc28c03afb32453d4e3f3fa8ae3fdfccbdbf7d90a4d0a35c5db0d"} Feb 16 17:06:46 crc kubenswrapper[4870]: I0216 17:06:46.256103 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwn58" event={"ID":"0d25274f-1b87-4f2b-90aa-71dc0c0b3184","Type":"ContainerStarted","Data":"ccd3ceac311673054e6d2c9d6297669bdd51064ac2b5c23003921ed5dc73259e"} Feb 16 17:06:46 crc kubenswrapper[4870]: I0216 17:06:46.301476 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dwn58" podStartSLOduration=2.9251056269999998 podStartE2EDuration="5.30146218s" podCreationTimestamp="2026-02-16 17:06:41 +0000 UTC" firstStartedPulling="2026-02-16 17:06:43.208011718 +0000 UTC m=+407.691476142" lastFinishedPulling="2026-02-16 17:06:45.584368311 +0000 UTC m=+410.067832695" observedRunningTime="2026-02-16 17:06:46.299828882 +0000 UTC m=+410.783293286" watchObservedRunningTime="2026-02-16 17:06:46.30146218 +0000 UTC m=+410.784926564" Feb 16 17:06:47 crc kubenswrapper[4870]: I0216 17:06:47.264721 4870 generic.go:334] "Generic (PLEG): container finished" podID="db6fcf97-0653-4411-b5ae-a3af8532801d" containerID="1e275c0f2addc28c03afb32453d4e3f3fa8ae3fdfccbdbf7d90a4d0a35c5db0d" 
exitCode=0 Feb 16 17:06:47 crc kubenswrapper[4870]: I0216 17:06:47.264834 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s487d" event={"ID":"db6fcf97-0653-4411-b5ae-a3af8532801d","Type":"ContainerDied","Data":"1e275c0f2addc28c03afb32453d4e3f3fa8ae3fdfccbdbf7d90a4d0a35c5db0d"} Feb 16 17:06:47 crc kubenswrapper[4870]: I0216 17:06:47.267370 4870 generic.go:334] "Generic (PLEG): container finished" podID="bc80d675-c023-458d-8287-3e56add1b1d2" containerID="37d292b31797098864129cac90954277f2931d8b1de76318efbdd3987c45c0cf" exitCode=0 Feb 16 17:06:47 crc kubenswrapper[4870]: I0216 17:06:47.267453 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sz5lq" event={"ID":"bc80d675-c023-458d-8287-3e56add1b1d2","Type":"ContainerDied","Data":"37d292b31797098864129cac90954277f2931d8b1de76318efbdd3987c45c0cf"} Feb 16 17:06:48 crc kubenswrapper[4870]: I0216 17:06:48.277014 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s487d" event={"ID":"db6fcf97-0653-4411-b5ae-a3af8532801d","Type":"ContainerStarted","Data":"919e22497000a7483f11b11820ce3ca05744576f52aab7ce22bc80574db064bd"} Feb 16 17:06:48 crc kubenswrapper[4870]: I0216 17:06:48.279009 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sz5lq" event={"ID":"bc80d675-c023-458d-8287-3e56add1b1d2","Type":"ContainerStarted","Data":"e95be8ab84c14683730e7bb7b1cd3b025ad6ba445add554db5cfce197110bef2"} Feb 16 17:06:48 crc kubenswrapper[4870]: I0216 17:06:48.298835 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s487d" podStartSLOduration=2.700486528 podStartE2EDuration="5.298817438s" podCreationTimestamp="2026-02-16 17:06:43 +0000 UTC" firstStartedPulling="2026-02-16 17:06:45.238924662 +0000 UTC m=+409.722389056" lastFinishedPulling="2026-02-16 17:06:47.837255582 
+0000 UTC m=+412.320719966" observedRunningTime="2026-02-16 17:06:48.294677958 +0000 UTC m=+412.778142342" watchObservedRunningTime="2026-02-16 17:06:48.298817438 +0000 UTC m=+412.782281822" Feb 16 17:06:48 crc kubenswrapper[4870]: I0216 17:06:48.311021 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sz5lq" podStartSLOduration=2.73525324 podStartE2EDuration="5.311003853s" podCreationTimestamp="2026-02-16 17:06:43 +0000 UTC" firstStartedPulling="2026-02-16 17:06:45.235916645 +0000 UTC m=+409.719381029" lastFinishedPulling="2026-02-16 17:06:47.811667258 +0000 UTC m=+412.295131642" observedRunningTime="2026-02-16 17:06:48.308935642 +0000 UTC m=+412.792400026" watchObservedRunningTime="2026-02-16 17:06:48.311003853 +0000 UTC m=+412.794468237" Feb 16 17:06:51 crc kubenswrapper[4870]: I0216 17:06:51.464270 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-44v98" Feb 16 17:06:51 crc kubenswrapper[4870]: I0216 17:06:51.464750 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-44v98" Feb 16 17:06:51 crc kubenswrapper[4870]: I0216 17:06:51.534507 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-44v98" Feb 16 17:06:51 crc kubenswrapper[4870]: I0216 17:06:51.955855 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dwn58" Feb 16 17:06:51 crc kubenswrapper[4870]: I0216 17:06:51.955935 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dwn58" Feb 16 17:06:52 crc kubenswrapper[4870]: I0216 17:06:52.003865 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dwn58" Feb 16 17:06:52 crc kubenswrapper[4870]: I0216 
17:06:52.344121 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dwn58" Feb 16 17:06:52 crc kubenswrapper[4870]: I0216 17:06:52.350833 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-44v98" Feb 16 17:06:53 crc kubenswrapper[4870]: I0216 17:06:53.864765 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s487d" Feb 16 17:06:53 crc kubenswrapper[4870]: I0216 17:06:53.865132 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s487d" Feb 16 17:06:53 crc kubenswrapper[4870]: I0216 17:06:53.883500 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" podUID="e6bf0f44-e205-4b3c-8360-a9578c67459f" containerName="registry" containerID="cri-o://f3f558d593d108e138747a264274f3c88244fb746de4bc8bda93aa3003706909" gracePeriod=30 Feb 16 17:06:53 crc kubenswrapper[4870]: I0216 17:06:53.920428 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s487d" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.085452 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sz5lq" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.085516 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sz5lq" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.140079 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sz5lq" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.292853 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.316323 4870 generic.go:334] "Generic (PLEG): container finished" podID="e6bf0f44-e205-4b3c-8360-a9578c67459f" containerID="f3f558d593d108e138747a264274f3c88244fb746de4bc8bda93aa3003706909" exitCode=0 Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.316563 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" event={"ID":"e6bf0f44-e205-4b3c-8360-a9578c67459f","Type":"ContainerDied","Data":"f3f558d593d108e138747a264274f3c88244fb746de4bc8bda93aa3003706909"} Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.316598 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" event={"ID":"e6bf0f44-e205-4b3c-8360-a9578c67459f","Type":"ContainerDied","Data":"f15deb91dc680e9f0ded4deea9cc421d10367fb62eb2482d9db355d926a0d5bc"} Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.316616 4870 scope.go:117] "RemoveContainer" containerID="f3f558d593d108e138747a264274f3c88244fb746de4bc8bda93aa3003706909" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.317647 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cf6bm" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.335266 4870 scope.go:117] "RemoveContainer" containerID="f3f558d593d108e138747a264274f3c88244fb746de4bc8bda93aa3003706909" Feb 16 17:06:54 crc kubenswrapper[4870]: E0216 17:06:54.335839 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3f558d593d108e138747a264274f3c88244fb746de4bc8bda93aa3003706909\": container with ID starting with f3f558d593d108e138747a264274f3c88244fb746de4bc8bda93aa3003706909 not found: ID does not exist" containerID="f3f558d593d108e138747a264274f3c88244fb746de4bc8bda93aa3003706909" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.335876 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3f558d593d108e138747a264274f3c88244fb746de4bc8bda93aa3003706909"} err="failed to get container status \"f3f558d593d108e138747a264274f3c88244fb746de4bc8bda93aa3003706909\": rpc error: code = NotFound desc = could not find container \"f3f558d593d108e138747a264274f3c88244fb746de4bc8bda93aa3003706909\": container with ID starting with f3f558d593d108e138747a264274f3c88244fb746de4bc8bda93aa3003706909 not found: ID does not exist" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.355336 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sz5lq" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.356090 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s487d" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.416860 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e6bf0f44-e205-4b3c-8360-a9578c67459f-ca-trust-extracted\") pod 
\"e6bf0f44-e205-4b3c-8360-a9578c67459f\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.419606 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"e6bf0f44-e205-4b3c-8360-a9578c67459f\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.419708 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6bf0f44-e205-4b3c-8360-a9578c67459f-trusted-ca\") pod \"e6bf0f44-e205-4b3c-8360-a9578c67459f\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.419764 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e6bf0f44-e205-4b3c-8360-a9578c67459f-registry-certificates\") pod \"e6bf0f44-e205-4b3c-8360-a9578c67459f\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.419794 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-registry-tls\") pod \"e6bf0f44-e205-4b3c-8360-a9578c67459f\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.419879 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e6bf0f44-e205-4b3c-8360-a9578c67459f-installation-pull-secrets\") pod \"e6bf0f44-e205-4b3c-8360-a9578c67459f\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.419957 4870 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-bound-sa-token\") pod \"e6bf0f44-e205-4b3c-8360-a9578c67459f\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.419991 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdd2g\" (UniqueName: \"kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-kube-api-access-tdd2g\") pod \"e6bf0f44-e205-4b3c-8360-a9578c67459f\" (UID: \"e6bf0f44-e205-4b3c-8360-a9578c67459f\") " Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.421980 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6bf0f44-e205-4b3c-8360-a9578c67459f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e6bf0f44-e205-4b3c-8360-a9578c67459f" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.424232 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6bf0f44-e205-4b3c-8360-a9578c67459f-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e6bf0f44-e205-4b3c-8360-a9578c67459f" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.428083 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e6bf0f44-e205-4b3c-8360-a9578c67459f" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.431647 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-kube-api-access-tdd2g" (OuterVolumeSpecName: "kube-api-access-tdd2g") pod "e6bf0f44-e205-4b3c-8360-a9578c67459f" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f"). InnerVolumeSpecName "kube-api-access-tdd2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.431659 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6bf0f44-e205-4b3c-8360-a9578c67459f-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e6bf0f44-e205-4b3c-8360-a9578c67459f" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.432034 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e6bf0f44-e205-4b3c-8360-a9578c67459f" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.438671 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "e6bf0f44-e205-4b3c-8360-a9578c67459f" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.440377 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6bf0f44-e205-4b3c-8360-a9578c67459f-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e6bf0f44-e205-4b3c-8360-a9578c67459f" (UID: "e6bf0f44-e205-4b3c-8360-a9578c67459f"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.522494 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6bf0f44-e205-4b3c-8360-a9578c67459f-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.522539 4870 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e6bf0f44-e205-4b3c-8360-a9578c67459f-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.522554 4870 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.522567 4870 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e6bf0f44-e205-4b3c-8360-a9578c67459f-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.522580 4870 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.522592 4870 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-tdd2g\" (UniqueName: \"kubernetes.io/projected/e6bf0f44-e205-4b3c-8360-a9578c67459f-kube-api-access-tdd2g\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.522603 4870 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e6bf0f44-e205-4b3c-8360-a9578c67459f-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.658266 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cf6bm"] Feb 16 17:06:54 crc kubenswrapper[4870]: I0216 17:06:54.672966 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cf6bm"] Feb 16 17:06:56 crc kubenswrapper[4870]: I0216 17:06:56.229645 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6bf0f44-e205-4b3c-8360-a9578c67459f" path="/var/lib/kubelet/pods/e6bf0f44-e205-4b3c-8360-a9578c67459f/volumes" Feb 16 17:08:35 crc kubenswrapper[4870]: I0216 17:08:35.366507 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:08:35 crc kubenswrapper[4870]: I0216 17:08:35.366928 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:08:56 crc kubenswrapper[4870]: I0216 17:08:56.457636 4870 scope.go:117] "RemoveContainer" containerID="889871adcacd8d65757b1085f48371e28562dae726c037d686b616dc3bce395c" Feb 16 17:09:05 crc 
kubenswrapper[4870]: I0216 17:09:05.366577 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:09:05 crc kubenswrapper[4870]: I0216 17:09:05.367320 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:09:35 crc kubenswrapper[4870]: I0216 17:09:35.366816 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:09:35 crc kubenswrapper[4870]: I0216 17:09:35.369107 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:09:35 crc kubenswrapper[4870]: I0216 17:09:35.369258 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:09:35 crc kubenswrapper[4870]: I0216 17:09:35.370025 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7ca0ad42a015d4e082134ca747039ba8a51f27cc8bcf372698a3dfdcb0045da"} 
pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:09:35 crc kubenswrapper[4870]: I0216 17:09:35.370187 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" containerID="cri-o://d7ca0ad42a015d4e082134ca747039ba8a51f27cc8bcf372698a3dfdcb0045da" gracePeriod=600 Feb 16 17:09:36 crc kubenswrapper[4870]: I0216 17:09:36.323589 4870 generic.go:334] "Generic (PLEG): container finished" podID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerID="d7ca0ad42a015d4e082134ca747039ba8a51f27cc8bcf372698a3dfdcb0045da" exitCode=0 Feb 16 17:09:36 crc kubenswrapper[4870]: I0216 17:09:36.323695 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerDied","Data":"d7ca0ad42a015d4e082134ca747039ba8a51f27cc8bcf372698a3dfdcb0045da"} Feb 16 17:09:36 crc kubenswrapper[4870]: I0216 17:09:36.324345 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerStarted","Data":"02e7dc6801b04294cf296b6adc3615ad5492b082048d6e70fbe5ab1eef8f5cb4"} Feb 16 17:09:36 crc kubenswrapper[4870]: I0216 17:09:36.324381 4870 scope.go:117] "RemoveContainer" containerID="563d08ece6d8d03837c0e89113bb97e1e95888579fc4a7e6ea7811bf1591b1d0" Feb 16 17:09:56 crc kubenswrapper[4870]: I0216 17:09:56.499564 4870 scope.go:117] "RemoveContainer" containerID="e744686d7f4aad132f77b8a00b47f245b1df26243168fb90b1c56f1e657211d9" Feb 16 17:09:56 crc kubenswrapper[4870]: I0216 17:09:56.522511 4870 scope.go:117] "RemoveContainer" 
containerID="1e5ca1a8d9c93226039adbaa9c16cc11a34f4c36056354dd1f0b7ddcd7616c1e" Feb 16 17:11:32 crc kubenswrapper[4870]: I0216 17:11:32.643250 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r"] Feb 16 17:11:32 crc kubenswrapper[4870]: E0216 17:11:32.644024 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6bf0f44-e205-4b3c-8360-a9578c67459f" containerName="registry" Feb 16 17:11:32 crc kubenswrapper[4870]: I0216 17:11:32.644037 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6bf0f44-e205-4b3c-8360-a9578c67459f" containerName="registry" Feb 16 17:11:32 crc kubenswrapper[4870]: I0216 17:11:32.644129 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6bf0f44-e205-4b3c-8360-a9578c67459f" containerName="registry" Feb 16 17:11:32 crc kubenswrapper[4870]: I0216 17:11:32.644818 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" Feb 16 17:11:32 crc kubenswrapper[4870]: I0216 17:11:32.646827 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 17:11:32 crc kubenswrapper[4870]: I0216 17:11:32.661770 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r"] Feb 16 17:11:32 crc kubenswrapper[4870]: I0216 17:11:32.809485 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r\" (UID: \"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" Feb 16 17:11:32 crc 
kubenswrapper[4870]: I0216 17:11:32.809527 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r\" (UID: \"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" Feb 16 17:11:32 crc kubenswrapper[4870]: I0216 17:11:32.809608 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4768\" (UniqueName: \"kubernetes.io/projected/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-kube-api-access-l4768\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r\" (UID: \"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" Feb 16 17:11:32 crc kubenswrapper[4870]: I0216 17:11:32.910896 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r\" (UID: \"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" Feb 16 17:11:32 crc kubenswrapper[4870]: I0216 17:11:32.911106 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r\" (UID: \"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" Feb 16 17:11:32 crc kubenswrapper[4870]: I0216 17:11:32.911220 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-l4768\" (UniqueName: \"kubernetes.io/projected/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-kube-api-access-l4768\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r\" (UID: \"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" Feb 16 17:11:32 crc kubenswrapper[4870]: I0216 17:11:32.911429 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r\" (UID: \"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" Feb 16 17:11:32 crc kubenswrapper[4870]: I0216 17:11:32.911619 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r\" (UID: \"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" Feb 16 17:11:32 crc kubenswrapper[4870]: I0216 17:11:32.931602 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4768\" (UniqueName: \"kubernetes.io/projected/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-kube-api-access-l4768\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r\" (UID: \"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" Feb 16 17:11:33 crc kubenswrapper[4870]: I0216 17:11:33.015124 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" Feb 16 17:11:33 crc kubenswrapper[4870]: I0216 17:11:33.215326 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r"] Feb 16 17:11:34 crc kubenswrapper[4870]: I0216 17:11:34.084290 4870 generic.go:334] "Generic (PLEG): container finished" podID="ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb" containerID="b3565c414a2aa1ac3c30e935726a6e2a34458da28aef04bf0ae31b46e432483b" exitCode=0 Feb 16 17:11:34 crc kubenswrapper[4870]: I0216 17:11:34.084378 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" event={"ID":"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb","Type":"ContainerDied","Data":"b3565c414a2aa1ac3c30e935726a6e2a34458da28aef04bf0ae31b46e432483b"} Feb 16 17:11:34 crc kubenswrapper[4870]: I0216 17:11:34.084636 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" event={"ID":"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb","Type":"ContainerStarted","Data":"7d3e3a93f9e1b7aab1d35868c45e8801ba05b7e25b4f60d699845b6ef1d13e54"} Feb 16 17:11:34 crc kubenswrapper[4870]: I0216 17:11:34.087732 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:11:35 crc kubenswrapper[4870]: I0216 17:11:35.367351 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:11:35 crc kubenswrapper[4870]: I0216 17:11:35.367638 4870 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:11:36 crc kubenswrapper[4870]: I0216 17:11:36.098524 4870 generic.go:334] "Generic (PLEG): container finished" podID="ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb" containerID="7177860545a1142a77d1e57ac4b5ccd80d679d3dc979fa102fba572b37d60069" exitCode=0 Feb 16 17:11:36 crc kubenswrapper[4870]: I0216 17:11:36.098577 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" event={"ID":"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb","Type":"ContainerDied","Data":"7177860545a1142a77d1e57ac4b5ccd80d679d3dc979fa102fba572b37d60069"} Feb 16 17:11:37 crc kubenswrapper[4870]: I0216 17:11:37.105891 4870 generic.go:334] "Generic (PLEG): container finished" podID="ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb" containerID="ba9bd32a05f571ddc1adac0ffa7118589414bc9e21117b0f8f61ca94f2909141" exitCode=0 Feb 16 17:11:37 crc kubenswrapper[4870]: I0216 17:11:37.105982 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" event={"ID":"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb","Type":"ContainerDied","Data":"ba9bd32a05f571ddc1adac0ffa7118589414bc9e21117b0f8f61ca94f2909141"} Feb 16 17:11:38 crc kubenswrapper[4870]: I0216 17:11:38.406059 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" Feb 16 17:11:38 crc kubenswrapper[4870]: I0216 17:11:38.488026 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-bundle\") pod \"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb\" (UID: \"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb\") " Feb 16 17:11:38 crc kubenswrapper[4870]: I0216 17:11:38.488089 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-util\") pod \"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb\" (UID: \"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb\") " Feb 16 17:11:38 crc kubenswrapper[4870]: I0216 17:11:38.488125 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4768\" (UniqueName: \"kubernetes.io/projected/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-kube-api-access-l4768\") pod \"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb\" (UID: \"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb\") " Feb 16 17:11:38 crc kubenswrapper[4870]: I0216 17:11:38.491397 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-bundle" (OuterVolumeSpecName: "bundle") pod "ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb" (UID: "ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:11:38 crc kubenswrapper[4870]: I0216 17:11:38.500523 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-kube-api-access-l4768" (OuterVolumeSpecName: "kube-api-access-l4768") pod "ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb" (UID: "ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb"). InnerVolumeSpecName "kube-api-access-l4768". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:11:38 crc kubenswrapper[4870]: I0216 17:11:38.589933 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4768\" (UniqueName: \"kubernetes.io/projected/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-kube-api-access-l4768\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:38 crc kubenswrapper[4870]: I0216 17:11:38.590016 4870 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:38 crc kubenswrapper[4870]: I0216 17:11:38.856521 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-util" (OuterVolumeSpecName: "util") pod "ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb" (UID: "ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:11:38 crc kubenswrapper[4870]: I0216 17:11:38.892498 4870 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb-util\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:39 crc kubenswrapper[4870]: I0216 17:11:39.122059 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" event={"ID":"ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb","Type":"ContainerDied","Data":"7d3e3a93f9e1b7aab1d35868c45e8801ba05b7e25b4f60d699845b6ef1d13e54"} Feb 16 17:11:39 crc kubenswrapper[4870]: I0216 17:11:39.122099 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d3e3a93f9e1b7aab1d35868c45e8801ba05b7e25b4f60d699845b6ef1d13e54" Feb 16 17:11:39 crc kubenswrapper[4870]: I0216 17:11:39.122161 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r" Feb 16 17:11:48 crc kubenswrapper[4870]: I0216 17:11:48.852055 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-drrrv"] Feb 16 17:11:48 crc kubenswrapper[4870]: I0216 17:11:48.853376 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovn-controller" containerID="cri-o://29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f" gracePeriod=30 Feb 16 17:11:48 crc kubenswrapper[4870]: I0216 17:11:48.853853 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="sbdb" containerID="cri-o://ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5" gracePeriod=30 Feb 16 17:11:48 crc kubenswrapper[4870]: I0216 17:11:48.853999 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="nbdb" containerID="cri-o://a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764" gracePeriod=30 Feb 16 17:11:48 crc kubenswrapper[4870]: I0216 17:11:48.854038 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="northd" containerID="cri-o://216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b" gracePeriod=30 Feb 16 17:11:48 crc kubenswrapper[4870]: I0216 17:11:48.854078 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" 
containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2" gracePeriod=30 Feb 16 17:11:48 crc kubenswrapper[4870]: I0216 17:11:48.854113 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="kube-rbac-proxy-node" containerID="cri-o://1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100" gracePeriod=30 Feb 16 17:11:48 crc kubenswrapper[4870]: I0216 17:11:48.854164 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovn-acl-logging" containerID="cri-o://cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e" gracePeriod=30 Feb 16 17:11:48 crc kubenswrapper[4870]: I0216 17:11:48.944004 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovnkube-controller" containerID="cri-o://56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4" gracePeriod=30 Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.196577 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjq54_52f144f1-d0b6-4871-a439-6aaf51304c4b/kube-multus/2.log" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.202721 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjq54_52f144f1-d0b6-4871-a439-6aaf51304c4b/kube-multus/1.log" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.202789 4870 generic.go:334] "Generic (PLEG): container finished" podID="52f144f1-d0b6-4871-a439-6aaf51304c4b" containerID="b0a335f8947cdf12560eade87cdde71ba410a4fb308365d680fba7d66dfa88b5" exitCode=2 Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 
17:11:49.202867 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jjq54" event={"ID":"52f144f1-d0b6-4871-a439-6aaf51304c4b","Type":"ContainerDied","Data":"b0a335f8947cdf12560eade87cdde71ba410a4fb308365d680fba7d66dfa88b5"} Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.202921 4870 scope.go:117] "RemoveContainer" containerID="f967b353df1fd88fc135755f8ab9d8f0b1dedc8b005d09f4191f821eccb01bdb" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.203636 4870 scope.go:117] "RemoveContainer" containerID="b0a335f8947cdf12560eade87cdde71ba410a4fb308365d680fba7d66dfa88b5" Feb 16 17:11:49 crc kubenswrapper[4870]: E0216 17:11:49.203902 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-jjq54_openshift-multus(52f144f1-d0b6-4871-a439-6aaf51304c4b)\"" pod="openshift-multus/multus-jjq54" podUID="52f144f1-d0b6-4871-a439-6aaf51304c4b" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.212521 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovnkube-controller/3.log" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.216477 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovn-acl-logging/0.log" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.217085 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovn-controller/0.log" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.218018 4870 generic.go:334] "Generic (PLEG): container finished" podID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerID="56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4" exitCode=0 Feb 16 17:11:49 crc 
kubenswrapper[4870]: I0216 17:11:49.218087 4870 generic.go:334] "Generic (PLEG): container finished" podID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerID="216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b" exitCode=0 Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.218082 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerDied","Data":"56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4"} Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.218106 4870 generic.go:334] "Generic (PLEG): container finished" podID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerID="026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2" exitCode=0 Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.218120 4870 generic.go:334] "Generic (PLEG): container finished" podID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerID="1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100" exitCode=0 Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.218130 4870 generic.go:334] "Generic (PLEG): container finished" podID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerID="cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e" exitCode=143 Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.218134 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerDied","Data":"216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b"} Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.218143 4870 generic.go:334] "Generic (PLEG): container finished" podID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerID="29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f" exitCode=143 Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.218152 4870 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerDied","Data":"026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2"} Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.218166 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerDied","Data":"1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100"} Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.218179 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerDied","Data":"cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e"} Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.218191 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerDied","Data":"29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f"} Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.237863 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovnkube-controller/3.log" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.241970 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovn-acl-logging/0.log" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.242498 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovn-controller/0.log" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.242930 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.246642 4870 scope.go:117] "RemoveContainer" containerID="5b5256536b28a974ba56223677edeff840dc94d3e8dbbbe1c03b09c1e82f6ffb" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.304709 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cxq6t"] Feb 16 17:11:49 crc kubenswrapper[4870]: E0216 17:11:49.305305 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovn-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305328 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovn-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: E0216 17:11:49.305347 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb" containerName="util" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305354 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb" containerName="util" Feb 16 17:11:49 crc kubenswrapper[4870]: E0216 17:11:49.305363 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="sbdb" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305371 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="sbdb" Feb 16 17:11:49 crc kubenswrapper[4870]: E0216 17:11:49.305386 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305392 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 
17:11:49 crc kubenswrapper[4870]: E0216 17:11:49.305400 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="northd" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305406 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="northd" Feb 16 17:11:49 crc kubenswrapper[4870]: E0216 17:11:49.305419 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovnkube-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305426 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovnkube-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: E0216 17:11:49.305436 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovn-acl-logging" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305443 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovn-acl-logging" Feb 16 17:11:49 crc kubenswrapper[4870]: E0216 17:11:49.305456 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovnkube-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305463 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovnkube-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: E0216 17:11:49.305476 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="kubecfg-setup" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305483 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="kubecfg-setup" Feb 16 17:11:49 crc 
kubenswrapper[4870]: E0216 17:11:49.305492 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb" containerName="pull" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305498 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb" containerName="pull" Feb 16 17:11:49 crc kubenswrapper[4870]: E0216 17:11:49.305510 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb" containerName="extract" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305518 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb" containerName="extract" Feb 16 17:11:49 crc kubenswrapper[4870]: E0216 17:11:49.305527 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="kube-rbac-proxy-node" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305534 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="kube-rbac-proxy-node" Feb 16 17:11:49 crc kubenswrapper[4870]: E0216 17:11:49.305549 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="nbdb" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305555 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="nbdb" Feb 16 17:11:49 crc kubenswrapper[4870]: E0216 17:11:49.305564 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovnkube-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305571 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovnkube-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: E0216 17:11:49.305583 4870 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovnkube-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305590 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovnkube-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305758 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovnkube-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305770 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovnkube-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305782 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovn-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305795 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305801 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="sbdb" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305815 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="northd" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305825 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovn-acl-logging" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305833 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb" containerName="extract" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 
17:11:49.305843 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="kube-rbac-proxy-node" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305849 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovnkube-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.305860 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="nbdb" Feb 16 17:11:49 crc kubenswrapper[4870]: E0216 17:11:49.306074 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovnkube-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.306085 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovnkube-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.306245 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovnkube-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.306253 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerName="ovnkube-controller" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.310663 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.360722 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-slash\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.360804 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-var-lib-openvswitch\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.360843 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmshf\" (UniqueName: \"kubernetes.io/projected/650bce90-73d6-474d-ab19-f50252dc8bc3-kube-api-access-dmshf\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.360882 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-env-overrides\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.360874 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-slash" (OuterVolumeSpecName: "host-slash") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.360909 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-kubelet\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.360999 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361024 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361003 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-systemd-units\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361070 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-log-socket\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361041 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361095 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-run-netns\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361116 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-ovn\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361133 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-node-log\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361136 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-log-socket" (OuterVolumeSpecName: "log-socket") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361165 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-etc-openvswitch\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361180 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-systemd\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361197 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361219 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-ovnkube-config\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361237 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-run-ovn-kubernetes\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361267 4870 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/650bce90-73d6-474d-ab19-f50252dc8bc3-ovn-node-metrics-cert\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361301 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-ovnkube-script-lib\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361355 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-openvswitch\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361370 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-cni-bin\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361398 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-cni-netd\") pod \"650bce90-73d6-474d-ab19-f50252dc8bc3\" (UID: \"650bce90-73d6-474d-ab19-f50252dc8bc3\") " Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361568 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: 
"650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361606 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361617 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-log-socket\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361639 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361682 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361742 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-node-log" (OuterVolumeSpecName: "node-log") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361772 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361644 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-systemd-units\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361924 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361984 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.361995 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362019 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362131 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-slash\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362233 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-run-openvswitch\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362270 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-cni-netd\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362355 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2f274975-9184-4cc3-b4d0-c9e11be6a300-ovnkube-script-lib\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362401 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-run-netns\") pod \"ovnkube-node-cxq6t\" (UID: 
\"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362448 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-etc-openvswitch\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362482 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362507 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-kubelet\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362590 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-run-systemd\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362631 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-run-ovn\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362659 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2f274975-9184-4cc3-b4d0-c9e11be6a300-ovn-node-metrics-cert\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362697 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-node-log\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362716 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-run-ovn-kubernetes\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362733 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362740 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362765 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2f274975-9184-4cc3-b4d0-c9e11be6a300-env-overrides\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362785 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2f274975-9184-4cc3-b4d0-c9e11be6a300-ovnkube-config\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.362876 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-var-lib-openvswitch\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363007 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-cni-bin\") 
pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363048 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbncf\" (UniqueName: \"kubernetes.io/projected/2f274975-9184-4cc3-b4d0-c9e11be6a300-kube-api-access-lbncf\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363153 4870 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363173 4870 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363185 4870 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-log-socket\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363197 4870 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363209 4870 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363220 4870 reconciler_common.go:293] "Volume detached for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-node-log\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363231 4870 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363245 4870 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363259 4870 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363272 4870 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363284 4870 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363295 4870 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363306 4870 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363318 4870 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363329 4870 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-host-slash\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363344 4870 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.363355 4870 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/650bce90-73d6-474d-ab19-f50252dc8bc3-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.367738 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650bce90-73d6-474d-ab19-f50252dc8bc3-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.368097 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/650bce90-73d6-474d-ab19-f50252dc8bc3-kube-api-access-dmshf" (OuterVolumeSpecName: "kube-api-access-dmshf") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "kube-api-access-dmshf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.377248 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "650bce90-73d6-474d-ab19-f50252dc8bc3" (UID: "650bce90-73d6-474d-ab19-f50252dc8bc3"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.464990 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-run-ovn-kubernetes\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465052 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465085 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/2f274975-9184-4cc3-b4d0-c9e11be6a300-env-overrides\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465114 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2f274975-9184-4cc3-b4d0-c9e11be6a300-ovnkube-config\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465142 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-var-lib-openvswitch\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465184 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-cni-bin\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465207 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbncf\" (UniqueName: \"kubernetes.io/projected/2f274975-9184-4cc3-b4d0-c9e11be6a300-kube-api-access-lbncf\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465192 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-run-ovn-kubernetes\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465282 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-log-socket\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465233 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-log-socket\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465326 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465356 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-systemd-units\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465426 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-slash\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465448 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-run-openvswitch\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465473 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-cni-netd\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465525 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2f274975-9184-4cc3-b4d0-c9e11be6a300-ovnkube-script-lib\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465553 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-run-netns\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465592 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-etc-openvswitch\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465656 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-kubelet\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465685 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-run-systemd\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465722 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-run-ovn\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465741 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2f274975-9184-4cc3-b4d0-c9e11be6a300-ovn-node-metrics-cert\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465783 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-node-log\") pod 
\"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465862 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmshf\" (UniqueName: \"kubernetes.io/projected/650bce90-73d6-474d-ab19-f50252dc8bc3-kube-api-access-dmshf\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465886 4870 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/650bce90-73d6-474d-ab19-f50252dc8bc3-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465896 4870 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/650bce90-73d6-474d-ab19-f50252dc8bc3-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.465933 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-node-log\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.466042 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-systemd-units\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.466045 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2f274975-9184-4cc3-b4d0-c9e11be6a300-env-overrides\") pod \"ovnkube-node-cxq6t\" (UID: 
\"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.466068 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-slash\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.466098 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-run-openvswitch\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.466123 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-cni-netd\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.466676 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2f274975-9184-4cc3-b4d0-c9e11be6a300-ovnkube-config\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.466745 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-var-lib-openvswitch\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc 
kubenswrapper[4870]: I0216 17:11:49.466780 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-cni-bin\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.466886 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2f274975-9184-4cc3-b4d0-c9e11be6a300-ovnkube-script-lib\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.466934 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-run-netns\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.466977 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-etc-openvswitch\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.467005 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-host-kubelet\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.467025 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-run-systemd\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.467051 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f274975-9184-4cc3-b4d0-c9e11be6a300-run-ovn\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.470927 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2f274975-9184-4cc3-b4d0-c9e11be6a300-ovn-node-metrics-cert\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.488486 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbncf\" (UniqueName: \"kubernetes.io/projected/2f274975-9184-4cc3-b4d0-c9e11be6a300-kube-api-access-lbncf\") pod \"ovnkube-node-cxq6t\" (UID: \"2f274975-9184-4cc3-b4d0-c9e11be6a300\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:49 crc kubenswrapper[4870]: I0216 17:11:49.632122 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.162733 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-v797k"] Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.164529 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.166887 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.167097 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.167494 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-5m8m6" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.224777 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjq54_52f144f1-d0b6-4871-a439-6aaf51304c4b/kube-multus/2.log" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.225962 4870 generic.go:334] "Generic (PLEG): container finished" podID="2f274975-9184-4cc3-b4d0-c9e11be6a300" containerID="c153d8d448f4b9520cf678a112fd1e983ddbbb0843810f60367961bdbc21b396" exitCode=0 Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.229813 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" event={"ID":"2f274975-9184-4cc3-b4d0-c9e11be6a300","Type":"ContainerDied","Data":"c153d8d448f4b9520cf678a112fd1e983ddbbb0843810f60367961bdbc21b396"} Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.229859 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" event={"ID":"2f274975-9184-4cc3-b4d0-c9e11be6a300","Type":"ContainerStarted","Data":"cf3455e090cc8f9819e7db8349e1f5f2ce9231de1e7699d95bb1b215d5cb72d2"} Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.229877 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x"] Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 
17:11:50.230675 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.236522 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovn-acl-logging/0.log" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.237412 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-drrrv_650bce90-73d6-474d-ab19-f50252dc8bc3/ovn-controller/0.log" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.237831 4870 generic.go:334] "Generic (PLEG): container finished" podID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerID="ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5" exitCode=0 Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.237854 4870 generic.go:334] "Generic (PLEG): container finished" podID="650bce90-73d6-474d-ab19-f50252dc8bc3" containerID="a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764" exitCode=0 Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.237881 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerDied","Data":"ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5"} Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.237902 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerDied","Data":"a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764"} Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.237918 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs"] Feb 16 
17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.237980 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.238027 4870 scope.go:117] "RemoveContainer" containerID="56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.238749 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drrrv" event={"ID":"650bce90-73d6-474d-ab19-f50252dc8bc3","Type":"ContainerDied","Data":"4d37f852f5f2aebc6de9e0be323618461948d9480fe2bb3f27d8d9c00c61ba19"} Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.238809 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.240778 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.244144 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-vnfw4" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.269632 4870 scope.go:117] "RemoveContainer" containerID="ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.275122 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/063bbc92-30f0-4cb3-9f15-8f303b2fe4d0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs\" (UID: \"063bbc92-30f0-4cb3-9f15-8f303b2fe4d0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 
17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.275175 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x\" (UID: \"10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.275197 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x\" (UID: \"10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.275274 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk7xq\" (UniqueName: \"kubernetes.io/projected/d26622cb-47bc-4378-867b-abad855869a5-kube-api-access-fk7xq\") pod \"obo-prometheus-operator-68bc856cb9-v797k\" (UID: \"d26622cb-47bc-4378-867b-abad855869a5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.275508 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/063bbc92-30f0-4cb3-9f15-8f303b2fe4d0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs\" (UID: \"063bbc92-30f0-4cb3-9f15-8f303b2fe4d0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.311478 4870 scope.go:117] "RemoveContainer" 
containerID="a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.332823 4870 scope.go:117] "RemoveContainer" containerID="216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.368131 4870 scope.go:117] "RemoveContainer" containerID="026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.377470 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/063bbc92-30f0-4cb3-9f15-8f303b2fe4d0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs\" (UID: \"063bbc92-30f0-4cb3-9f15-8f303b2fe4d0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.377558 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/063bbc92-30f0-4cb3-9f15-8f303b2fe4d0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs\" (UID: \"063bbc92-30f0-4cb3-9f15-8f303b2fe4d0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.377586 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x\" (UID: \"10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.377605 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x\" (UID: \"10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.377634 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk7xq\" (UniqueName: \"kubernetes.io/projected/d26622cb-47bc-4378-867b-abad855869a5-kube-api-access-fk7xq\") pod \"obo-prometheus-operator-68bc856cb9-v797k\" (UID: \"d26622cb-47bc-4378-867b-abad855869a5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.384625 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/063bbc92-30f0-4cb3-9f15-8f303b2fe4d0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs\" (UID: \"063bbc92-30f0-4cb3-9f15-8f303b2fe4d0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.396500 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x\" (UID: \"10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.401834 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x\" (UID: \"10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.402181 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/063bbc92-30f0-4cb3-9f15-8f303b2fe4d0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs\" (UID: \"063bbc92-30f0-4cb3-9f15-8f303b2fe4d0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.416058 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk7xq\" (UniqueName: \"kubernetes.io/projected/d26622cb-47bc-4378-867b-abad855869a5-kube-api-access-fk7xq\") pod \"obo-prometheus-operator-68bc856cb9-v797k\" (UID: \"d26622cb-47bc-4378-867b-abad855869a5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.426814 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-drrrv"] Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.443111 4870 scope.go:117] "RemoveContainer" containerID="1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.450042 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-drrrv"] Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.462243 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7mxf6"] Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.463249 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.473716 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.474110 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-4hw46" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.481344 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.493279 4870 scope.go:117] "RemoveContainer" containerID="cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.518931 4870 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v797k_openshift-operators_d26622cb-47bc-4378-867b-abad855869a5_0(49cf8be2f133d3663f231ea96307a4057d9d81d1786efd6fbae8da267b5b64b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.519021 4870 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v797k_openshift-operators_d26622cb-47bc-4378-867b-abad855869a5_0(49cf8be2f133d3663f231ea96307a4057d9d81d1786efd6fbae8da267b5b64b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.519078 4870 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v797k_openshift-operators_d26622cb-47bc-4378-867b-abad855869a5_0(49cf8be2f133d3663f231ea96307a4057d9d81d1786efd6fbae8da267b5b64b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.519142 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-v797k_openshift-operators(d26622cb-47bc-4378-867b-abad855869a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-v797k_openshift-operators(d26622cb-47bc-4378-867b-abad855869a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v797k_openshift-operators_d26622cb-47bc-4378-867b-abad855869a5_0(49cf8be2f133d3663f231ea96307a4057d9d81d1786efd6fbae8da267b5b64b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" podUID="d26622cb-47bc-4378-867b-abad855869a5" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.528204 4870 scope.go:117] "RemoveContainer" containerID="29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.549358 4870 scope.go:117] "RemoveContainer" containerID="ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.562886 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.578336 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.579351 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7726c8d8-365d-4b95-9b6c-2c95c221f1f4-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7mxf6\" (UID: \"7726c8d8-365d-4b95-9b6c-2c95c221f1f4\") " pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.579377 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88lgj\" (UniqueName: \"kubernetes.io/projected/7726c8d8-365d-4b95-9b6c-2c95c221f1f4-kube-api-access-88lgj\") pod \"observability-operator-59bdc8b94-7mxf6\" (UID: \"7726c8d8-365d-4b95-9b6c-2c95c221f1f4\") " pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.593535 4870 scope.go:117] "RemoveContainer" containerID="56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.595439 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4\": container with ID starting with 56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4 not found: ID does not exist" containerID="56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.595478 4870 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4"} err="failed to get container status \"56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4\": rpc error: code = NotFound desc = could not find container \"56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4\": container with ID starting with 56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4 not found: ID does not exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.595517 4870 scope.go:117] "RemoveContainer" containerID="ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.595807 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\": container with ID starting with ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5 not found: ID does not exist" containerID="ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.595831 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5"} err="failed to get container status \"ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\": rpc error: code = NotFound desc = could not find container \"ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\": container with ID starting with ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5 not found: ID does not exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.595851 4870 scope.go:117] "RemoveContainer" containerID="a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.596289 4870 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\": container with ID starting with a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764 not found: ID does not exist" containerID="a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.596306 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764"} err="failed to get container status \"a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\": rpc error: code = NotFound desc = could not find container \"a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\": container with ID starting with a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764 not found: ID does not exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.596319 4870 scope.go:117] "RemoveContainer" containerID="216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.598534 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\": container with ID starting with 216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b not found: ID does not exist" containerID="216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.598556 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b"} err="failed to get container status \"216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\": rpc error: code = NotFound desc = could 
not find container \"216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\": container with ID starting with 216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b not found: ID does not exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.598575 4870 scope.go:117] "RemoveContainer" containerID="026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.600241 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\": container with ID starting with 026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2 not found: ID does not exist" containerID="026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.600301 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2"} err="failed to get container status \"026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\": rpc error: code = NotFound desc = could not find container \"026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\": container with ID starting with 026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2 not found: ID does not exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.600337 4870 scope.go:117] "RemoveContainer" containerID="1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.600693 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\": container with ID starting with 1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100 not found: 
ID does not exist" containerID="1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.600751 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100"} err="failed to get container status \"1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\": rpc error: code = NotFound desc = could not find container \"1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\": container with ID starting with 1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100 not found: ID does not exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.600781 4870 scope.go:117] "RemoveContainer" containerID="cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.601051 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\": container with ID starting with cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e not found: ID does not exist" containerID="cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.601079 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e"} err="failed to get container status \"cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\": rpc error: code = NotFound desc = could not find container \"cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\": container with ID starting with cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e not found: ID does not exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.601122 4870 
scope.go:117] "RemoveContainer" containerID="29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.601292 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\": container with ID starting with 29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f not found: ID does not exist" containerID="29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.601313 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f"} err="failed to get container status \"29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\": rpc error: code = NotFound desc = could not find container \"29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\": container with ID starting with 29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f not found: ID does not exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.601325 4870 scope.go:117] "RemoveContainer" containerID="ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.601535 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\": container with ID starting with ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8 not found: ID does not exist" containerID="ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.601555 4870 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8"} err="failed to get container status \"ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\": rpc error: code = NotFound desc = could not find container \"ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\": container with ID starting with ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8 not found: ID does not exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.601597 4870 scope.go:117] "RemoveContainer" containerID="56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.601762 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4"} err="failed to get container status \"56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4\": rpc error: code = NotFound desc = could not find container \"56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4\": container with ID starting with 56684ef2f5624504b03a4bdb4ab63cce4c9921ce5de95801542a0875054b1bc4 not found: ID does not exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.601782 4870 scope.go:117] "RemoveContainer" containerID="ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.602003 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5"} err="failed to get container status \"ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\": rpc error: code = NotFound desc = could not find container \"ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5\": container with ID starting with ba98f7b272a959f7989c216be5b7abe5456bd5d31869fbce7b1b0ce131d662b5 not found: ID does not 
exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.602020 4870 scope.go:117] "RemoveContainer" containerID="a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.602199 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764"} err="failed to get container status \"a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\": rpc error: code = NotFound desc = could not find container \"a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764\": container with ID starting with a0c6545ba9ec08ca69c4a641963c71957898e17b98f995efec4c9debb1735764 not found: ID does not exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.602219 4870 scope.go:117] "RemoveContainer" containerID="216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.602406 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b"} err="failed to get container status \"216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\": rpc error: code = NotFound desc = could not find container \"216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b\": container with ID starting with 216755d8279976b35aff12f4ac6877db6550e24fb3bf19831168ba3030ebf93b not found: ID does not exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.602425 4870 scope.go:117] "RemoveContainer" containerID="026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.602625 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2"} err="failed to get container status 
\"026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\": rpc error: code = NotFound desc = could not find container \"026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2\": container with ID starting with 026b97a05d113dbe0b4a68516b7840ead04b6e7d847e9cf8bfa73d8c8b9ae3d2 not found: ID does not exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.602641 4870 scope.go:117] "RemoveContainer" containerID="1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.602812 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100"} err="failed to get container status \"1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\": rpc error: code = NotFound desc = could not find container \"1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100\": container with ID starting with 1c9670649969ba1eaf94f7e9154cce55cf24d68592072b8d02deb12105502100 not found: ID does not exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.602847 4870 scope.go:117] "RemoveContainer" containerID="cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.603001 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e"} err="failed to get container status \"cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\": rpc error: code = NotFound desc = could not find container \"cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e\": container with ID starting with cc8afe348167a924afc73475bb0460e375c86c35a465fb53443d7899c049101e not found: ID does not exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.603017 4870 scope.go:117] "RemoveContainer" 
containerID="29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.603168 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f"} err="failed to get container status \"29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\": rpc error: code = NotFound desc = could not find container \"29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f\": container with ID starting with 29549c50bdb1fa8ee0e82526cd22c757a17272cf53d71c50edb502893f2d0e6f not found: ID does not exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.603185 4870 scope.go:117] "RemoveContainer" containerID="ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.603320 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8"} err="failed to get container status \"ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\": rpc error: code = NotFound desc = could not find container \"ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8\": container with ID starting with ce2344b03663f79bccb5ac364937175c302d18504cca74b245f7e19673af8dc8 not found: ID does not exist" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.604751 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pbwnb"] Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.605744 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.607667 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-tzptz" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.607801 4870 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators_10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a_0(289a50f99001cbc41b38f4a064680e898c026eb0054d97a753cd447e83739f30): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.607848 4870 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators_10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a_0(289a50f99001cbc41b38f4a064680e898c026eb0054d97a753cd447e83739f30): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.607870 4870 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators_10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a_0(289a50f99001cbc41b38f4a064680e898c026eb0054d97a753cd447e83739f30): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.607913 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators(10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators(10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators_10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a_0(289a50f99001cbc41b38f4a064680e898c026eb0054d97a753cd447e83739f30): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" podUID="10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.633226 4870 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators_063bbc92-30f0-4cb3-9f15-8f303b2fe4d0_0(258986ec27f74dc6548ef4e76d53a207146f285f2483c89f1fa3c510a1f82e93): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.633321 4870 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators_063bbc92-30f0-4cb3-9f15-8f303b2fe4d0_0(258986ec27f74dc6548ef4e76d53a207146f285f2483c89f1fa3c510a1f82e93): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.633346 4870 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators_063bbc92-30f0-4cb3-9f15-8f303b2fe4d0_0(258986ec27f74dc6548ef4e76d53a207146f285f2483c89f1fa3c510a1f82e93): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.633391 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators(063bbc92-30f0-4cb3-9f15-8f303b2fe4d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators(063bbc92-30f0-4cb3-9f15-8f303b2fe4d0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators_063bbc92-30f0-4cb3-9f15-8f303b2fe4d0_0(258986ec27f74dc6548ef4e76d53a207146f285f2483c89f1fa3c510a1f82e93): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" podUID="063bbc92-30f0-4cb3-9f15-8f303b2fe4d0" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.680605 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87m9c\" (UniqueName: \"kubernetes.io/projected/02ecd319-55c1-4189-bf38-35f08025630c-kube-api-access-87m9c\") pod \"perses-operator-5bf474d74f-pbwnb\" (UID: \"02ecd319-55c1-4189-bf38-35f08025630c\") " pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.680735 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7726c8d8-365d-4b95-9b6c-2c95c221f1f4-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7mxf6\" (UID: \"7726c8d8-365d-4b95-9b6c-2c95c221f1f4\") " pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.680785 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88lgj\" (UniqueName: \"kubernetes.io/projected/7726c8d8-365d-4b95-9b6c-2c95c221f1f4-kube-api-access-88lgj\") pod \"observability-operator-59bdc8b94-7mxf6\" (UID: \"7726c8d8-365d-4b95-9b6c-2c95c221f1f4\") " pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.680824 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/02ecd319-55c1-4189-bf38-35f08025630c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pbwnb\" (UID: \"02ecd319-55c1-4189-bf38-35f08025630c\") " pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.686899 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7726c8d8-365d-4b95-9b6c-2c95c221f1f4-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7mxf6\" (UID: \"7726c8d8-365d-4b95-9b6c-2c95c221f1f4\") " pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.706695 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88lgj\" (UniqueName: \"kubernetes.io/projected/7726c8d8-365d-4b95-9b6c-2c95c221f1f4-kube-api-access-88lgj\") pod \"observability-operator-59bdc8b94-7mxf6\" (UID: \"7726c8d8-365d-4b95-9b6c-2c95c221f1f4\") " pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.782123 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/02ecd319-55c1-4189-bf38-35f08025630c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pbwnb\" (UID: \"02ecd319-55c1-4189-bf38-35f08025630c\") " pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.782192 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87m9c\" (UniqueName: \"kubernetes.io/projected/02ecd319-55c1-4189-bf38-35f08025630c-kube-api-access-87m9c\") pod \"perses-operator-5bf474d74f-pbwnb\" (UID: \"02ecd319-55c1-4189-bf38-35f08025630c\") " pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.782937 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/02ecd319-55c1-4189-bf38-35f08025630c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pbwnb\" (UID: \"02ecd319-55c1-4189-bf38-35f08025630c\") " 
pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.794363 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.804038 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87m9c\" (UniqueName: \"kubernetes.io/projected/02ecd319-55c1-4189-bf38-35f08025630c-kube-api-access-87m9c\") pod \"perses-operator-5bf474d74f-pbwnb\" (UID: \"02ecd319-55c1-4189-bf38-35f08025630c\") " pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.835538 4870 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7mxf6_openshift-operators_7726c8d8-365d-4b95-9b6c-2c95c221f1f4_0(b7ccff90faf54e0aa4fe5323e90dc1e3b91951d0b10ba897b5317756c99e3344): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.835632 4870 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7mxf6_openshift-operators_7726c8d8-365d-4b95-9b6c-2c95c221f1f4_0(b7ccff90faf54e0aa4fe5323e90dc1e3b91951d0b10ba897b5317756c99e3344): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.835672 4870 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7mxf6_openshift-operators_7726c8d8-365d-4b95-9b6c-2c95c221f1f4_0(b7ccff90faf54e0aa4fe5323e90dc1e3b91951d0b10ba897b5317756c99e3344): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.835722 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-7mxf6_openshift-operators(7726c8d8-365d-4b95-9b6c-2c95c221f1f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-7mxf6_openshift-operators(7726c8d8-365d-4b95-9b6c-2c95c221f1f4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7mxf6_openshift-operators_7726c8d8-365d-4b95-9b6c-2c95c221f1f4_0(b7ccff90faf54e0aa4fe5323e90dc1e3b91951d0b10ba897b5317756c99e3344): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" podUID="7726c8d8-365d-4b95-9b6c-2c95c221f1f4" Feb 16 17:11:50 crc kubenswrapper[4870]: I0216 17:11:50.921894 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.944302 4870 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pbwnb_openshift-operators_02ecd319-55c1-4189-bf38-35f08025630c_0(dabf6c3afce2550bf6b2e8af7db7f425674dfd9e1641f6a9ccaae637c400b4d5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.944371 4870 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pbwnb_openshift-operators_02ecd319-55c1-4189-bf38-35f08025630c_0(dabf6c3afce2550bf6b2e8af7db7f425674dfd9e1641f6a9ccaae637c400b4d5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.944392 4870 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pbwnb_openshift-operators_02ecd319-55c1-4189-bf38-35f08025630c_0(dabf6c3afce2550bf6b2e8af7db7f425674dfd9e1641f6a9ccaae637c400b4d5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:11:50 crc kubenswrapper[4870]: E0216 17:11:50.944439 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-pbwnb_openshift-operators(02ecd319-55c1-4189-bf38-35f08025630c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-pbwnb_openshift-operators(02ecd319-55c1-4189-bf38-35f08025630c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pbwnb_openshift-operators_02ecd319-55c1-4189-bf38-35f08025630c_0(dabf6c3afce2550bf6b2e8af7db7f425674dfd9e1641f6a9ccaae637c400b4d5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" podUID="02ecd319-55c1-4189-bf38-35f08025630c" Feb 16 17:11:51 crc kubenswrapper[4870]: I0216 17:11:51.245686 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" event={"ID":"2f274975-9184-4cc3-b4d0-c9e11be6a300","Type":"ContainerStarted","Data":"e9b18ff2752b28d5af99da6a2bfc234cefa4253a569fd16e247b5a696e1cf0a2"} Feb 16 17:11:51 crc kubenswrapper[4870]: I0216 17:11:51.245734 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" event={"ID":"2f274975-9184-4cc3-b4d0-c9e11be6a300","Type":"ContainerStarted","Data":"b16fb69e854bfcbd70d0935da0ea4d8b86db22b9d68e7566acec8aed37768f1a"} Feb 16 17:11:51 crc kubenswrapper[4870]: I0216 17:11:51.245747 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" event={"ID":"2f274975-9184-4cc3-b4d0-c9e11be6a300","Type":"ContainerStarted","Data":"fb8aaa000d30d91eef02b81b29e538d19173297a99096d2fd9f275447f307a45"} Feb 16 17:11:51 crc kubenswrapper[4870]: I0216 17:11:51.245758 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" event={"ID":"2f274975-9184-4cc3-b4d0-c9e11be6a300","Type":"ContainerStarted","Data":"1ebf2489155f9f33c01511b1b76ecfd2b7b1ca44cd592ef8d4071559f9c8fa60"} Feb 16 17:11:51 crc kubenswrapper[4870]: I0216 17:11:51.245768 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" event={"ID":"2f274975-9184-4cc3-b4d0-c9e11be6a300","Type":"ContainerStarted","Data":"64b8d5843edebc4d46ff6dfb091c2a6b62c9e7bd34274e304e229f5d22f7b011"} Feb 16 17:11:51 crc kubenswrapper[4870]: I0216 17:11:51.245778 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" event={"ID":"2f274975-9184-4cc3-b4d0-c9e11be6a300","Type":"ContainerStarted","Data":"8ed02f474a5dea921ad5a1089855c117bbdec75032dc8b69f8cc77a3e4539de8"} Feb 16 17:11:52 crc kubenswrapper[4870]: I0216 17:11:52.229164 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="650bce90-73d6-474d-ab19-f50252dc8bc3" path="/var/lib/kubelet/pods/650bce90-73d6-474d-ab19-f50252dc8bc3/volumes" Feb 16 17:11:54 crc kubenswrapper[4870]: I0216 17:11:54.265208 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" event={"ID":"2f274975-9184-4cc3-b4d0-c9e11be6a300","Type":"ContainerStarted","Data":"aee23b09262109eaf75fd0dc02c67158ef51c21a8ef9f1544aabe50bffc3455c"} Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.278802 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" event={"ID":"2f274975-9184-4cc3-b4d0-c9e11be6a300","Type":"ContainerStarted","Data":"7127aff7c69eeda2149e29ce6474a9f4bd5196b1dcf626f5145aea11b9711b11"} Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.279596 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.279614 4870 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.312553 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" podStartSLOduration=7.312531655 podStartE2EDuration="7.312531655s" podCreationTimestamp="2026-02-16 17:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:11:56.309851598 +0000 UTC m=+720.793315992" watchObservedRunningTime="2026-02-16 17:11:56.312531655 +0000 UTC m=+720.795996039" Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.331248 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7mxf6"] Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.331359 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.331473 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.331928 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.337538 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-v797k"] Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.337681 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.338114 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.345078 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs"] Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.345188 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.345542 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.352161 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pbwnb"] Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.352282 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.352704 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.365616 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x"] Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.365730 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.366106 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.396280 4870 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v797k_openshift-operators_d26622cb-47bc-4378-867b-abad855869a5_0(c07ee693df443cfecf5cc92353c90cf48bce2d9db4056702acb4bb2d03b0b962): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.396376 4870 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v797k_openshift-operators_d26622cb-47bc-4378-867b-abad855869a5_0(c07ee693df443cfecf5cc92353c90cf48bce2d9db4056702acb4bb2d03b0b962): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.396401 4870 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v797k_openshift-operators_d26622cb-47bc-4378-867b-abad855869a5_0(c07ee693df443cfecf5cc92353c90cf48bce2d9db4056702acb4bb2d03b0b962): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.396482 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-v797k_openshift-operators(d26622cb-47bc-4378-867b-abad855869a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-v797k_openshift-operators(d26622cb-47bc-4378-867b-abad855869a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v797k_openshift-operators_d26622cb-47bc-4378-867b-abad855869a5_0(c07ee693df443cfecf5cc92353c90cf48bce2d9db4056702acb4bb2d03b0b962): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" podUID="d26622cb-47bc-4378-867b-abad855869a5" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.403621 4870 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7mxf6_openshift-operators_7726c8d8-365d-4b95-9b6c-2c95c221f1f4_0(9d9c2c7d9abba8eeb7e7798316beccc94c7e8eedbf55083ded6911b12c1c4421): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.403698 4870 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7mxf6_openshift-operators_7726c8d8-365d-4b95-9b6c-2c95c221f1f4_0(9d9c2c7d9abba8eeb7e7798316beccc94c7e8eedbf55083ded6911b12c1c4421): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.403736 4870 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7mxf6_openshift-operators_7726c8d8-365d-4b95-9b6c-2c95c221f1f4_0(9d9c2c7d9abba8eeb7e7798316beccc94c7e8eedbf55083ded6911b12c1c4421): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.403784 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-7mxf6_openshift-operators(7726c8d8-365d-4b95-9b6c-2c95c221f1f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-7mxf6_openshift-operators(7726c8d8-365d-4b95-9b6c-2c95c221f1f4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7mxf6_openshift-operators_7726c8d8-365d-4b95-9b6c-2c95c221f1f4_0(9d9c2c7d9abba8eeb7e7798316beccc94c7e8eedbf55083ded6911b12c1c4421): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" podUID="7726c8d8-365d-4b95-9b6c-2c95c221f1f4" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.420295 4870 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators_063bbc92-30f0-4cb3-9f15-8f303b2fe4d0_0(83da77553e86c3a46c820ec46adcd08a2b5798cfa7f55d21dacc426c2de2cfb9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.420367 4870 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators_063bbc92-30f0-4cb3-9f15-8f303b2fe4d0_0(83da77553e86c3a46c820ec46adcd08a2b5798cfa7f55d21dacc426c2de2cfb9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.420406 4870 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators_063bbc92-30f0-4cb3-9f15-8f303b2fe4d0_0(83da77553e86c3a46c820ec46adcd08a2b5798cfa7f55d21dacc426c2de2cfb9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.420452 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators(063bbc92-30f0-4cb3-9f15-8f303b2fe4d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators(063bbc92-30f0-4cb3-9f15-8f303b2fe4d0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators_063bbc92-30f0-4cb3-9f15-8f303b2fe4d0_0(83da77553e86c3a46c820ec46adcd08a2b5798cfa7f55d21dacc426c2de2cfb9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" podUID="063bbc92-30f0-4cb3-9f15-8f303b2fe4d0" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.430497 4870 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators_10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a_0(c18d7a4d1299b27c32d3111c4a81e14d2c8d94cb9f1e4454b0364e245f28e34a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.430556 4870 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators_10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a_0(c18d7a4d1299b27c32d3111c4a81e14d2c8d94cb9f1e4454b0364e245f28e34a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.430575 4870 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators_10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a_0(c18d7a4d1299b27c32d3111c4a81e14d2c8d94cb9f1e4454b0364e245f28e34a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.430630 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators(10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators(10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators_10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a_0(c18d7a4d1299b27c32d3111c4a81e14d2c8d94cb9f1e4454b0364e245f28e34a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" podUID="10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.435441 4870 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pbwnb_openshift-operators_02ecd319-55c1-4189-bf38-35f08025630c_0(4ecaf369fa662a411fe9f36b8cf1ebd599ddc2be86be2b8b5c360e9c0bd49f67): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.435492 4870 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pbwnb_openshift-operators_02ecd319-55c1-4189-bf38-35f08025630c_0(4ecaf369fa662a411fe9f36b8cf1ebd599ddc2be86be2b8b5c360e9c0bd49f67): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.435528 4870 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pbwnb_openshift-operators_02ecd319-55c1-4189-bf38-35f08025630c_0(4ecaf369fa662a411fe9f36b8cf1ebd599ddc2be86be2b8b5c360e9c0bd49f67): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:11:56 crc kubenswrapper[4870]: E0216 17:11:56.435568 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-pbwnb_openshift-operators(02ecd319-55c1-4189-bf38-35f08025630c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-pbwnb_openshift-operators(02ecd319-55c1-4189-bf38-35f08025630c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pbwnb_openshift-operators_02ecd319-55c1-4189-bf38-35f08025630c_0(4ecaf369fa662a411fe9f36b8cf1ebd599ddc2be86be2b8b5c360e9c0bd49f67): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" podUID="02ecd319-55c1-4189-bf38-35f08025630c" Feb 16 17:11:56 crc kubenswrapper[4870]: I0216 17:11:56.578128 4870 scope.go:117] "RemoveContainer" containerID="518e743bc624ef2f44e10a703aeadd752f57757b2b1699a7880c9361671852ec" Feb 16 17:11:57 crc kubenswrapper[4870]: I0216 17:11:57.283449 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:11:57 crc kubenswrapper[4870]: I0216 17:11:57.310539 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:12:01 crc kubenswrapper[4870]: I0216 17:12:01.222901 4870 scope.go:117] "RemoveContainer" containerID="b0a335f8947cdf12560eade87cdde71ba410a4fb308365d680fba7d66dfa88b5" Feb 16 17:12:01 crc kubenswrapper[4870]: E0216 17:12:01.223407 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-jjq54_openshift-multus(52f144f1-d0b6-4871-a439-6aaf51304c4b)\"" pod="openshift-multus/multus-jjq54" podUID="52f144f1-d0b6-4871-a439-6aaf51304c4b" Feb 16 17:12:05 crc kubenswrapper[4870]: I0216 17:12:05.367434 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:12:05 crc kubenswrapper[4870]: I0216 17:12:05.367766 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 16 17:12:07 crc kubenswrapper[4870]: I0216 17:12:07.222872 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:12:07 crc kubenswrapper[4870]: I0216 17:12:07.223628 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:12:07 crc kubenswrapper[4870]: E0216 17:12:07.249507 4870 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators_063bbc92-30f0-4cb3-9f15-8f303b2fe4d0_0(36e7c69067728be60fa8c5efd70ea919ca6d1168e009e5f601ef6f34cc22ab97): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:12:07 crc kubenswrapper[4870]: E0216 17:12:07.249565 4870 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators_063bbc92-30f0-4cb3-9f15-8f303b2fe4d0_0(36e7c69067728be60fa8c5efd70ea919ca6d1168e009e5f601ef6f34cc22ab97): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:12:07 crc kubenswrapper[4870]: E0216 17:12:07.249584 4870 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators_063bbc92-30f0-4cb3-9f15-8f303b2fe4d0_0(36e7c69067728be60fa8c5efd70ea919ca6d1168e009e5f601ef6f34cc22ab97): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:12:07 crc kubenswrapper[4870]: E0216 17:12:07.249629 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators(063bbc92-30f0-4cb3-9f15-8f303b2fe4d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators(063bbc92-30f0-4cb3-9f15-8f303b2fe4d0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_openshift-operators_063bbc92-30f0-4cb3-9f15-8f303b2fe4d0_0(36e7c69067728be60fa8c5efd70ea919ca6d1168e009e5f601ef6f34cc22ab97): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" podUID="063bbc92-30f0-4cb3-9f15-8f303b2fe4d0" Feb 16 17:12:08 crc kubenswrapper[4870]: I0216 17:12:08.222618 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:12:08 crc kubenswrapper[4870]: I0216 17:12:08.223059 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:12:08 crc kubenswrapper[4870]: E0216 17:12:08.244541 4870 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7mxf6_openshift-operators_7726c8d8-365d-4b95-9b6c-2c95c221f1f4_0(fd55d98c17e5f979a1894ab424d9d6c94ea6ca0779e09034b43c2c9b142045fc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 17:12:08 crc kubenswrapper[4870]: E0216 17:12:08.244789 4870 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7mxf6_openshift-operators_7726c8d8-365d-4b95-9b6c-2c95c221f1f4_0(fd55d98c17e5f979a1894ab424d9d6c94ea6ca0779e09034b43c2c9b142045fc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:12:08 crc kubenswrapper[4870]: E0216 17:12:08.244808 4870 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7mxf6_openshift-operators_7726c8d8-365d-4b95-9b6c-2c95c221f1f4_0(fd55d98c17e5f979a1894ab424d9d6c94ea6ca0779e09034b43c2c9b142045fc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:12:08 crc kubenswrapper[4870]: E0216 17:12:08.244854 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-7mxf6_openshift-operators(7726c8d8-365d-4b95-9b6c-2c95c221f1f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-7mxf6_openshift-operators(7726c8d8-365d-4b95-9b6c-2c95c221f1f4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7mxf6_openshift-operators_7726c8d8-365d-4b95-9b6c-2c95c221f1f4_0(fd55d98c17e5f979a1894ab424d9d6c94ea6ca0779e09034b43c2c9b142045fc): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" podUID="7726c8d8-365d-4b95-9b6c-2c95c221f1f4" Feb 16 17:12:09 crc kubenswrapper[4870]: I0216 17:12:09.222474 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" Feb 16 17:12:09 crc kubenswrapper[4870]: I0216 17:12:09.223399 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" Feb 16 17:12:09 crc kubenswrapper[4870]: E0216 17:12:09.243223 4870 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v797k_openshift-operators_d26622cb-47bc-4378-867b-abad855869a5_0(acb7c82b961d8ac6317e6e8b06196f032eb82da64bfc57ae7bc60e7889b0bedb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:12:09 crc kubenswrapper[4870]: E0216 17:12:09.243352 4870 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v797k_openshift-operators_d26622cb-47bc-4378-867b-abad855869a5_0(acb7c82b961d8ac6317e6e8b06196f032eb82da64bfc57ae7bc60e7889b0bedb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" Feb 16 17:12:09 crc kubenswrapper[4870]: E0216 17:12:09.243429 4870 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v797k_openshift-operators_d26622cb-47bc-4378-867b-abad855869a5_0(acb7c82b961d8ac6317e6e8b06196f032eb82da64bfc57ae7bc60e7889b0bedb): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" Feb 16 17:12:09 crc kubenswrapper[4870]: E0216 17:12:09.243539 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-v797k_openshift-operators(d26622cb-47bc-4378-867b-abad855869a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-v797k_openshift-operators(d26622cb-47bc-4378-867b-abad855869a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-v797k_openshift-operators_d26622cb-47bc-4378-867b-abad855869a5_0(acb7c82b961d8ac6317e6e8b06196f032eb82da64bfc57ae7bc60e7889b0bedb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" podUID="d26622cb-47bc-4378-867b-abad855869a5" Feb 16 17:12:11 crc kubenswrapper[4870]: I0216 17:12:11.222158 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:12:11 crc kubenswrapper[4870]: I0216 17:12:11.222164 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:12:11 crc kubenswrapper[4870]: I0216 17:12:11.222938 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:12:11 crc kubenswrapper[4870]: I0216 17:12:11.223159 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:12:11 crc kubenswrapper[4870]: E0216 17:12:11.265122 4870 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pbwnb_openshift-operators_02ecd319-55c1-4189-bf38-35f08025630c_0(b465883bc211fe1c4f6a21a8d5f756ce4c8871a2024ee06c268c84facf68fdf8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:12:11 crc kubenswrapper[4870]: E0216 17:12:11.265212 4870 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pbwnb_openshift-operators_02ecd319-55c1-4189-bf38-35f08025630c_0(b465883bc211fe1c4f6a21a8d5f756ce4c8871a2024ee06c268c84facf68fdf8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:12:11 crc kubenswrapper[4870]: E0216 17:12:11.265242 4870 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pbwnb_openshift-operators_02ecd319-55c1-4189-bf38-35f08025630c_0(b465883bc211fe1c4f6a21a8d5f756ce4c8871a2024ee06c268c84facf68fdf8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:12:11 crc kubenswrapper[4870]: E0216 17:12:11.265301 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-pbwnb_openshift-operators(02ecd319-55c1-4189-bf38-35f08025630c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-pbwnb_openshift-operators(02ecd319-55c1-4189-bf38-35f08025630c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pbwnb_openshift-operators_02ecd319-55c1-4189-bf38-35f08025630c_0(b465883bc211fe1c4f6a21a8d5f756ce4c8871a2024ee06c268c84facf68fdf8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" podUID="02ecd319-55c1-4189-bf38-35f08025630c" Feb 16 17:12:11 crc kubenswrapper[4870]: E0216 17:12:11.271082 4870 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators_10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a_0(b5e2690df47438c5201dea0fed593b6d8e8345ab2618c84953e11c054ef14209): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:12:11 crc kubenswrapper[4870]: E0216 17:12:11.271198 4870 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators_10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a_0(b5e2690df47438c5201dea0fed593b6d8e8345ab2618c84953e11c054ef14209): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:12:11 crc kubenswrapper[4870]: E0216 17:12:11.271239 4870 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators_10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a_0(b5e2690df47438c5201dea0fed593b6d8e8345ab2618c84953e11c054ef14209): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:12:11 crc kubenswrapper[4870]: E0216 17:12:11.271317 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators(10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators(10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_openshift-operators_10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a_0(b5e2690df47438c5201dea0fed593b6d8e8345ab2618c84953e11c054ef14209): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" podUID="10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a" Feb 16 17:12:14 crc kubenswrapper[4870]: I0216 17:12:14.222412 4870 scope.go:117] "RemoveContainer" containerID="b0a335f8947cdf12560eade87cdde71ba410a4fb308365d680fba7d66dfa88b5" Feb 16 17:12:14 crc kubenswrapper[4870]: I0216 17:12:14.375100 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjq54_52f144f1-d0b6-4871-a439-6aaf51304c4b/kube-multus/2.log" Feb 16 17:12:15 crc kubenswrapper[4870]: I0216 17:12:15.382844 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jjq54_52f144f1-d0b6-4871-a439-6aaf51304c4b/kube-multus/2.log" Feb 16 17:12:15 crc kubenswrapper[4870]: I0216 17:12:15.383184 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jjq54" event={"ID":"52f144f1-d0b6-4871-a439-6aaf51304c4b","Type":"ContainerStarted","Data":"030819a68928cce2a3f66667c3868850bcf518cf5685ec975606ffba0d2667b9"} Feb 16 17:12:19 crc kubenswrapper[4870]: I0216 17:12:19.668090 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cxq6t" Feb 16 17:12:20 crc kubenswrapper[4870]: I0216 17:12:20.222930 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:12:20 crc kubenswrapper[4870]: I0216 17:12:20.223405 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:12:20 crc kubenswrapper[4870]: I0216 17:12:20.420278 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7mxf6"] Feb 16 17:12:20 crc kubenswrapper[4870]: W0216 17:12:20.427121 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7726c8d8_365d_4b95_9b6c_2c95c221f1f4.slice/crio-0ab5701b5fd257725866a4d6ebc1e218598475a05f89dd9d4ed093db62e7f3e2 WatchSource:0}: Error finding container 0ab5701b5fd257725866a4d6ebc1e218598475a05f89dd9d4ed093db62e7f3e2: Status 404 returned error can't find the container with id 0ab5701b5fd257725866a4d6ebc1e218598475a05f89dd9d4ed093db62e7f3e2 Feb 16 17:12:21 crc kubenswrapper[4870]: I0216 17:12:21.229179 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:12:21 crc kubenswrapper[4870]: I0216 17:12:21.230023 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" Feb 16 17:12:21 crc kubenswrapper[4870]: I0216 17:12:21.416298 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" event={"ID":"7726c8d8-365d-4b95-9b6c-2c95c221f1f4","Type":"ContainerStarted","Data":"0ab5701b5fd257725866a4d6ebc1e218598475a05f89dd9d4ed093db62e7f3e2"} Feb 16 17:12:21 crc kubenswrapper[4870]: I0216 17:12:21.461966 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs"] Feb 16 17:12:21 crc kubenswrapper[4870]: W0216 17:12:21.482513 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod063bbc92_30f0_4cb3_9f15_8f303b2fe4d0.slice/crio-176bb3b5c85a5fd5e39f9c791b84f9a40f51866a29c6851e09fce71fb35b97f4 WatchSource:0}: Error finding container 176bb3b5c85a5fd5e39f9c791b84f9a40f51866a29c6851e09fce71fb35b97f4: Status 404 returned error can't find the container with id 176bb3b5c85a5fd5e39f9c791b84f9a40f51866a29c6851e09fce71fb35b97f4 Feb 16 17:12:22 crc kubenswrapper[4870]: I0216 17:12:22.222725 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:12:22 crc kubenswrapper[4870]: I0216 17:12:22.223409 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:12:22 crc kubenswrapper[4870]: I0216 17:12:22.427674 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" event={"ID":"063bbc92-30f0-4cb3-9f15-8f303b2fe4d0","Type":"ContainerStarted","Data":"176bb3b5c85a5fd5e39f9c791b84f9a40f51866a29c6851e09fce71fb35b97f4"} Feb 16 17:12:22 crc kubenswrapper[4870]: I0216 17:12:22.430033 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pbwnb"] Feb 16 17:12:24 crc kubenswrapper[4870]: I0216 17:12:24.222500 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" Feb 16 17:12:24 crc kubenswrapper[4870]: I0216 17:12:24.222883 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" Feb 16 17:12:26 crc kubenswrapper[4870]: I0216 17:12:26.222798 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:12:26 crc kubenswrapper[4870]: I0216 17:12:26.228074 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" Feb 16 17:12:26 crc kubenswrapper[4870]: W0216 17:12:26.234608 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02ecd319_55c1_4189_bf38_35f08025630c.slice/crio-f9b4d05ad3c50df466fa747d8d655dc2a7e5b0259e07765b14b6d349b57bb9b0 WatchSource:0}: Error finding container f9b4d05ad3c50df466fa747d8d655dc2a7e5b0259e07765b14b6d349b57bb9b0: Status 404 returned error can't find the container with id f9b4d05ad3c50df466fa747d8d655dc2a7e5b0259e07765b14b6d349b57bb9b0 Feb 16 17:12:26 crc kubenswrapper[4870]: I0216 17:12:26.457251 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" event={"ID":"02ecd319-55c1-4189-bf38-35f08025630c","Type":"ContainerStarted","Data":"f9b4d05ad3c50df466fa747d8d655dc2a7e5b0259e07765b14b6d349b57bb9b0"} Feb 16 17:12:27 crc kubenswrapper[4870]: I0216 17:12:27.295304 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x"] Feb 16 17:12:27 crc kubenswrapper[4870]: I0216 17:12:27.337109 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-v797k"] Feb 16 17:12:27 crc kubenswrapper[4870]: W0216 17:12:27.343240 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd26622cb_47bc_4378_867b_abad855869a5.slice/crio-da53e62b9cf14eb1b4e3b5d565c1343597e8ece8bd71701adc78c4a45a41c146 WatchSource:0}: Error finding container da53e62b9cf14eb1b4e3b5d565c1343597e8ece8bd71701adc78c4a45a41c146: Status 404 returned error can't find the container with id da53e62b9cf14eb1b4e3b5d565c1343597e8ece8bd71701adc78c4a45a41c146 Feb 16 17:12:27 crc kubenswrapper[4870]: I0216 17:12:27.465025 4870 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" event={"ID":"d26622cb-47bc-4378-867b-abad855869a5","Type":"ContainerStarted","Data":"da53e62b9cf14eb1b4e3b5d565c1343597e8ece8bd71701adc78c4a45a41c146"} Feb 16 17:12:27 crc kubenswrapper[4870]: I0216 17:12:27.466245 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" event={"ID":"10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a","Type":"ContainerStarted","Data":"87a367cf91085db18f46e1c6e853a721fea3fef03909abcbec66fba88eb3ebae"} Feb 16 17:12:28 crc kubenswrapper[4870]: I0216 17:12:28.474387 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" event={"ID":"7726c8d8-365d-4b95-9b6c-2c95c221f1f4","Type":"ContainerStarted","Data":"e0cd134adcac2dcd8a9b897d36998cd62f78c72db40e69adca8a0e200de1ec2d"} Feb 16 17:12:28 crc kubenswrapper[4870]: I0216 17:12:28.474646 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:12:28 crc kubenswrapper[4870]: I0216 17:12:28.476001 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" event={"ID":"10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a","Type":"ContainerStarted","Data":"324d8434dbf846eb93ccbe3f4211c805b1ae8a2b3d85d3897ebc2663208bae97"} Feb 16 17:12:28 crc kubenswrapper[4870]: I0216 17:12:28.477740 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" event={"ID":"063bbc92-30f0-4cb3-9f15-8f303b2fe4d0","Type":"ContainerStarted","Data":"22b573137de53f675aa3c0c849cc4454d2b34842a700c7ffdd4ec0b7e67a2106"} Feb 16 17:12:28 crc kubenswrapper[4870]: I0216 17:12:28.479600 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" Feb 16 17:12:28 crc kubenswrapper[4870]: I0216 17:12:28.499150 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-7mxf6" podStartSLOduration=31.604167609 podStartE2EDuration="38.499076872s" podCreationTimestamp="2026-02-16 17:11:50 +0000 UTC" firstStartedPulling="2026-02-16 17:12:20.431906556 +0000 UTC m=+744.915370950" lastFinishedPulling="2026-02-16 17:12:27.326815829 +0000 UTC m=+751.810280213" observedRunningTime="2026-02-16 17:12:28.496254621 +0000 UTC m=+752.979719005" watchObservedRunningTime="2026-02-16 17:12:28.499076872 +0000 UTC m=+752.982541256" Feb 16 17:12:28 crc kubenswrapper[4870]: I0216 17:12:28.525206 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs" podStartSLOduration=32.666438455 podStartE2EDuration="38.525183323s" podCreationTimestamp="2026-02-16 17:11:50 +0000 UTC" firstStartedPulling="2026-02-16 17:12:21.491474755 +0000 UTC m=+745.974939139" lastFinishedPulling="2026-02-16 17:12:27.350219623 +0000 UTC m=+751.833684007" observedRunningTime="2026-02-16 17:12:28.520982163 +0000 UTC m=+753.004446567" watchObservedRunningTime="2026-02-16 17:12:28.525183323 +0000 UTC m=+753.008647707" Feb 16 17:12:28 crc kubenswrapper[4870]: I0216 17:12:28.590458 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x" podStartSLOduration=38.590440252 podStartE2EDuration="38.590440252s" podCreationTimestamp="2026-02-16 17:11:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:12:28.585036706 +0000 UTC m=+753.068501090" watchObservedRunningTime="2026-02-16 17:12:28.590440252 +0000 UTC m=+753.073904636" Feb 16 17:12:30 
crc kubenswrapper[4870]: I0216 17:12:30.495825 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" event={"ID":"02ecd319-55c1-4189-bf38-35f08025630c","Type":"ContainerStarted","Data":"ffb096e3cb8a0af95310787ac6025619a118b315b13b5d86eb6aec75cc81720b"} Feb 16 17:12:30 crc kubenswrapper[4870]: I0216 17:12:30.496204 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:12:31 crc kubenswrapper[4870]: I0216 17:12:31.503052 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" event={"ID":"d26622cb-47bc-4378-867b-abad855869a5","Type":"ContainerStarted","Data":"8ccc8d0e003d3335590c8c725aa51e0fae190009f10b011779a5819260322bbb"} Feb 16 17:12:31 crc kubenswrapper[4870]: I0216 17:12:31.521298 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v797k" podStartSLOduration=38.637770273 podStartE2EDuration="41.521283653s" podCreationTimestamp="2026-02-16 17:11:50 +0000 UTC" firstStartedPulling="2026-02-16 17:12:27.34976 +0000 UTC m=+751.833224384" lastFinishedPulling="2026-02-16 17:12:30.23327339 +0000 UTC m=+754.716737764" observedRunningTime="2026-02-16 17:12:31.516928638 +0000 UTC m=+756.000393022" watchObservedRunningTime="2026-02-16 17:12:31.521283653 +0000 UTC m=+756.004748037" Feb 16 17:12:31 crc kubenswrapper[4870]: I0216 17:12:31.523015 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" podStartSLOduration=37.541651944 podStartE2EDuration="41.523008163s" podCreationTimestamp="2026-02-16 17:11:50 +0000 UTC" firstStartedPulling="2026-02-16 17:12:26.237350001 +0000 UTC m=+750.720814375" lastFinishedPulling="2026-02-16 17:12:30.21870621 +0000 UTC m=+754.702170594" observedRunningTime="2026-02-16 
17:12:30.522246257 +0000 UTC m=+755.005710641" watchObservedRunningTime="2026-02-16 17:12:31.523008163 +0000 UTC m=+756.006472537" Feb 16 17:12:35 crc kubenswrapper[4870]: I0216 17:12:35.366497 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:12:35 crc kubenswrapper[4870]: I0216 17:12:35.366840 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:12:35 crc kubenswrapper[4870]: I0216 17:12:35.366884 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:12:35 crc kubenswrapper[4870]: I0216 17:12:35.367565 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"02e7dc6801b04294cf296b6adc3615ad5492b082048d6e70fbe5ab1eef8f5cb4"} pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:12:35 crc kubenswrapper[4870]: I0216 17:12:35.367630 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" containerID="cri-o://02e7dc6801b04294cf296b6adc3615ad5492b082048d6e70fbe5ab1eef8f5cb4" gracePeriod=600 Feb 16 17:12:35 crc kubenswrapper[4870]: I0216 17:12:35.528127 4870 generic.go:334] 
"Generic (PLEG): container finished" podID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerID="02e7dc6801b04294cf296b6adc3615ad5492b082048d6e70fbe5ab1eef8f5cb4" exitCode=0 Feb 16 17:12:35 crc kubenswrapper[4870]: I0216 17:12:35.528185 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerDied","Data":"02e7dc6801b04294cf296b6adc3615ad5492b082048d6e70fbe5ab1eef8f5cb4"} Feb 16 17:12:35 crc kubenswrapper[4870]: I0216 17:12:35.528442 4870 scope.go:117] "RemoveContainer" containerID="d7ca0ad42a015d4e082134ca747039ba8a51f27cc8bcf372698a3dfdcb0045da" Feb 16 17:12:36 crc kubenswrapper[4870]: I0216 17:12:36.535442 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerStarted","Data":"ae9b5f8dd0e4675f99af74251a96ffd60d2f653f4d32feb06324bf4aaba5fef5"} Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.058664 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7dmcz"] Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.059507 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7dmcz" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.060839 4870 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wj82g" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.061344 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.063462 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.068001 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7dmcz"] Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.073046 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-jmgm8"] Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.073712 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jmgm8" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.094535 4870 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zg67f" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.097625 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-jmgm8"] Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.104045 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-n9xgs"] Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.104721 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-n9xgs" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.106637 4870 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-6fzfp" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.121537 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-n9xgs"] Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.141716 4870 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.199441 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66fkz\" (UniqueName: \"kubernetes.io/projected/762fd866-f1ec-485f-a6c8-5230b0806f2c-kube-api-access-66fkz\") pod \"cert-manager-cainjector-cf98fcc89-7dmcz\" (UID: \"762fd866-f1ec-485f-a6c8-5230b0806f2c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7dmcz" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.199489 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lv5v\" (UniqueName: \"kubernetes.io/projected/37b05018-476e-4fc8-9f96-2d6ed226a0fa-kube-api-access-5lv5v\") pod \"cert-manager-858654f9db-jmgm8\" (UID: \"37b05018-476e-4fc8-9f96-2d6ed226a0fa\") " pod="cert-manager/cert-manager-858654f9db-jmgm8" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.199522 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5gkk\" (UniqueName: \"kubernetes.io/projected/541762a5-2ac3-47b8-84a7-4bab2757e90a-kube-api-access-n5gkk\") pod \"cert-manager-webhook-687f57d79b-n9xgs\" (UID: \"541762a5-2ac3-47b8-84a7-4bab2757e90a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-n9xgs" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 
17:12:37.300938 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lv5v\" (UniqueName: \"kubernetes.io/projected/37b05018-476e-4fc8-9f96-2d6ed226a0fa-kube-api-access-5lv5v\") pod \"cert-manager-858654f9db-jmgm8\" (UID: \"37b05018-476e-4fc8-9f96-2d6ed226a0fa\") " pod="cert-manager/cert-manager-858654f9db-jmgm8" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.301006 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5gkk\" (UniqueName: \"kubernetes.io/projected/541762a5-2ac3-47b8-84a7-4bab2757e90a-kube-api-access-n5gkk\") pod \"cert-manager-webhook-687f57d79b-n9xgs\" (UID: \"541762a5-2ac3-47b8-84a7-4bab2757e90a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-n9xgs" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.301099 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66fkz\" (UniqueName: \"kubernetes.io/projected/762fd866-f1ec-485f-a6c8-5230b0806f2c-kube-api-access-66fkz\") pod \"cert-manager-cainjector-cf98fcc89-7dmcz\" (UID: \"762fd866-f1ec-485f-a6c8-5230b0806f2c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7dmcz" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.320073 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5gkk\" (UniqueName: \"kubernetes.io/projected/541762a5-2ac3-47b8-84a7-4bab2757e90a-kube-api-access-n5gkk\") pod \"cert-manager-webhook-687f57d79b-n9xgs\" (UID: \"541762a5-2ac3-47b8-84a7-4bab2757e90a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-n9xgs" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.321752 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66fkz\" (UniqueName: \"kubernetes.io/projected/762fd866-f1ec-485f-a6c8-5230b0806f2c-kube-api-access-66fkz\") pod \"cert-manager-cainjector-cf98fcc89-7dmcz\" (UID: \"762fd866-f1ec-485f-a6c8-5230b0806f2c\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-7dmcz" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.330167 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lv5v\" (UniqueName: \"kubernetes.io/projected/37b05018-476e-4fc8-9f96-2d6ed226a0fa-kube-api-access-5lv5v\") pod \"cert-manager-858654f9db-jmgm8\" (UID: \"37b05018-476e-4fc8-9f96-2d6ed226a0fa\") " pod="cert-manager/cert-manager-858654f9db-jmgm8" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.377321 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7dmcz" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.402967 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-jmgm8" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.418807 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-n9xgs" Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.644701 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7dmcz"] Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.814556 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-jmgm8"] Feb 16 17:12:37 crc kubenswrapper[4870]: W0216 17:12:37.818509 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37b05018_476e_4fc8_9f96_2d6ed226a0fa.slice/crio-69c167ccd6cb0be7ddf49f287edaaf94d51cea6d9c25530b5262df768699b62b WatchSource:0}: Error finding container 69c167ccd6cb0be7ddf49f287edaaf94d51cea6d9c25530b5262df768699b62b: Status 404 returned error can't find the container with id 69c167ccd6cb0be7ddf49f287edaaf94d51cea6d9c25530b5262df768699b62b Feb 16 17:12:37 crc kubenswrapper[4870]: I0216 17:12:37.902217 4870 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-n9xgs"] Feb 16 17:12:37 crc kubenswrapper[4870]: W0216 17:12:37.903938 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod541762a5_2ac3_47b8_84a7_4bab2757e90a.slice/crio-07e8347b77b92490c1f4bf1bcce630bbe2cfb86dc8c2854d734799f2c7797923 WatchSource:0}: Error finding container 07e8347b77b92490c1f4bf1bcce630bbe2cfb86dc8c2854d734799f2c7797923: Status 404 returned error can't find the container with id 07e8347b77b92490c1f4bf1bcce630bbe2cfb86dc8c2854d734799f2c7797923 Feb 16 17:12:38 crc kubenswrapper[4870]: I0216 17:12:38.554481 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-n9xgs" event={"ID":"541762a5-2ac3-47b8-84a7-4bab2757e90a","Type":"ContainerStarted","Data":"07e8347b77b92490c1f4bf1bcce630bbe2cfb86dc8c2854d734799f2c7797923"} Feb 16 17:12:38 crc kubenswrapper[4870]: I0216 17:12:38.556006 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7dmcz" event={"ID":"762fd866-f1ec-485f-a6c8-5230b0806f2c","Type":"ContainerStarted","Data":"d30d1d4bba9c1c6a980a0edb3bb245d749665a58714ba805d6a0c96b2566e827"} Feb 16 17:12:38 crc kubenswrapper[4870]: I0216 17:12:38.557302 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-jmgm8" event={"ID":"37b05018-476e-4fc8-9f96-2d6ed226a0fa","Type":"ContainerStarted","Data":"69c167ccd6cb0be7ddf49f287edaaf94d51cea6d9c25530b5262df768699b62b"} Feb 16 17:12:40 crc kubenswrapper[4870]: I0216 17:12:40.935199 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-pbwnb" Feb 16 17:12:42 crc kubenswrapper[4870]: I0216 17:12:42.609645 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7dmcz" 
event={"ID":"762fd866-f1ec-485f-a6c8-5230b0806f2c","Type":"ContainerStarted","Data":"7e366400163935f71fa4cee551aad581c75c102e3f80436038aa3e93dcac3072"} Feb 16 17:12:42 crc kubenswrapper[4870]: I0216 17:12:42.615166 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-jmgm8" event={"ID":"37b05018-476e-4fc8-9f96-2d6ed226a0fa","Type":"ContainerStarted","Data":"8984c14a283fe4922c92d50cc272de58f4c7748bf41810ee3e88b58d68b4e573"} Feb 16 17:12:42 crc kubenswrapper[4870]: I0216 17:12:42.617588 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-n9xgs" event={"ID":"541762a5-2ac3-47b8-84a7-4bab2757e90a","Type":"ContainerStarted","Data":"4921433fb1401c49af2421c4a402d30ec6fa31f8373c1555dc0a71ac6d517868"} Feb 16 17:12:42 crc kubenswrapper[4870]: I0216 17:12:42.618207 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-n9xgs" Feb 16 17:12:42 crc kubenswrapper[4870]: I0216 17:12:42.638600 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7dmcz" podStartSLOduration=1.170477985 podStartE2EDuration="5.638582404s" podCreationTimestamp="2026-02-16 17:12:37 +0000 UTC" firstStartedPulling="2026-02-16 17:12:37.657986742 +0000 UTC m=+762.141451126" lastFinishedPulling="2026-02-16 17:12:42.126091161 +0000 UTC m=+766.609555545" observedRunningTime="2026-02-16 17:12:42.637223885 +0000 UTC m=+767.120688279" watchObservedRunningTime="2026-02-16 17:12:42.638582404 +0000 UTC m=+767.122046798" Feb 16 17:12:42 crc kubenswrapper[4870]: I0216 17:12:42.717441 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-n9xgs" podStartSLOduration=1.496467988 podStartE2EDuration="5.717418013s" podCreationTimestamp="2026-02-16 17:12:37 +0000 UTC" firstStartedPulling="2026-02-16 17:12:37.906192747 +0000 UTC 
m=+762.389657131" lastFinishedPulling="2026-02-16 17:12:42.127142772 +0000 UTC m=+766.610607156" observedRunningTime="2026-02-16 17:12:42.711332208 +0000 UTC m=+767.194796602" watchObservedRunningTime="2026-02-16 17:12:42.717418013 +0000 UTC m=+767.200882397" Feb 16 17:12:42 crc kubenswrapper[4870]: I0216 17:12:42.739896 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-jmgm8" podStartSLOduration=1.366718023 podStartE2EDuration="5.739869829s" podCreationTimestamp="2026-02-16 17:12:37 +0000 UTC" firstStartedPulling="2026-02-16 17:12:37.820106149 +0000 UTC m=+762.303570533" lastFinishedPulling="2026-02-16 17:12:42.193257955 +0000 UTC m=+766.676722339" observedRunningTime="2026-02-16 17:12:42.736701778 +0000 UTC m=+767.220166172" watchObservedRunningTime="2026-02-16 17:12:42.739869829 +0000 UTC m=+767.223334233" Feb 16 17:12:47 crc kubenswrapper[4870]: I0216 17:12:47.421926 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-n9xgs" Feb 16 17:13:10 crc kubenswrapper[4870]: I0216 17:13:10.674705 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm"] Feb 16 17:13:10 crc kubenswrapper[4870]: I0216 17:13:10.677004 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" Feb 16 17:13:10 crc kubenswrapper[4870]: I0216 17:13:10.680136 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 17:13:10 crc kubenswrapper[4870]: I0216 17:13:10.681466 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm"] Feb 16 17:13:10 crc kubenswrapper[4870]: I0216 17:13:10.735849 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv8lf\" (UniqueName: \"kubernetes.io/projected/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-kube-api-access-dv8lf\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm\" (UID: \"56929b47-8ff7-4aed-83a9-781ca5cf1c4a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" Feb 16 17:13:10 crc kubenswrapper[4870]: I0216 17:13:10.735993 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm\" (UID: \"56929b47-8ff7-4aed-83a9-781ca5cf1c4a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" Feb 16 17:13:10 crc kubenswrapper[4870]: I0216 17:13:10.736075 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm\" (UID: \"56929b47-8ff7-4aed-83a9-781ca5cf1c4a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" Feb 16 17:13:10 crc kubenswrapper[4870]: 
I0216 17:13:10.837122 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm\" (UID: \"56929b47-8ff7-4aed-83a9-781ca5cf1c4a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" Feb 16 17:13:10 crc kubenswrapper[4870]: I0216 17:13:10.837195 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv8lf\" (UniqueName: \"kubernetes.io/projected/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-kube-api-access-dv8lf\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm\" (UID: \"56929b47-8ff7-4aed-83a9-781ca5cf1c4a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" Feb 16 17:13:10 crc kubenswrapper[4870]: I0216 17:13:10.837232 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm\" (UID: \"56929b47-8ff7-4aed-83a9-781ca5cf1c4a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" Feb 16 17:13:10 crc kubenswrapper[4870]: I0216 17:13:10.837903 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm\" (UID: \"56929b47-8ff7-4aed-83a9-781ca5cf1c4a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" Feb 16 17:13:10 crc kubenswrapper[4870]: I0216 17:13:10.837921 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm\" (UID: \"56929b47-8ff7-4aed-83a9-781ca5cf1c4a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" Feb 16 17:13:10 crc kubenswrapper[4870]: I0216 17:13:10.855165 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv8lf\" (UniqueName: \"kubernetes.io/projected/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-kube-api-access-dv8lf\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm\" (UID: \"56929b47-8ff7-4aed-83a9-781ca5cf1c4a\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" Feb 16 17:13:10 crc kubenswrapper[4870]: I0216 17:13:10.997127 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" Feb 16 17:13:11 crc kubenswrapper[4870]: I0216 17:13:11.252897 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm"] Feb 16 17:13:11 crc kubenswrapper[4870]: I0216 17:13:11.803936 4870 generic.go:334] "Generic (PLEG): container finished" podID="56929b47-8ff7-4aed-83a9-781ca5cf1c4a" containerID="c72ba756e5b5c2ab5011b8acda6764c789b089b2b744245f883e9d2f1b9b43f3" exitCode=0 Feb 16 17:13:11 crc kubenswrapper[4870]: I0216 17:13:11.804003 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" event={"ID":"56929b47-8ff7-4aed-83a9-781ca5cf1c4a","Type":"ContainerDied","Data":"c72ba756e5b5c2ab5011b8acda6764c789b089b2b744245f883e9d2f1b9b43f3"} Feb 16 17:13:11 crc kubenswrapper[4870]: I0216 17:13:11.804034 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" event={"ID":"56929b47-8ff7-4aed-83a9-781ca5cf1c4a","Type":"ContainerStarted","Data":"bedfc1b7b56a12d602309613479c7c0abbee9f54aa47fdce2949185d86a8bc5e"} Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.031995 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fq78l"] Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.033729 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fq78l" Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.048646 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fq78l"] Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.099146 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/caf40079-ce5b-448b-9167-3add7d8c7881-catalog-content\") pod \"redhat-operators-fq78l\" (UID: \"caf40079-ce5b-448b-9167-3add7d8c7881\") " pod="openshift-marketplace/redhat-operators-fq78l" Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.099269 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8qjg\" (UniqueName: \"kubernetes.io/projected/caf40079-ce5b-448b-9167-3add7d8c7881-kube-api-access-x8qjg\") pod \"redhat-operators-fq78l\" (UID: \"caf40079-ce5b-448b-9167-3add7d8c7881\") " pod="openshift-marketplace/redhat-operators-fq78l" Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.099301 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/caf40079-ce5b-448b-9167-3add7d8c7881-utilities\") pod \"redhat-operators-fq78l\" (UID: \"caf40079-ce5b-448b-9167-3add7d8c7881\") " 
pod="openshift-marketplace/redhat-operators-fq78l" Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.200606 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/caf40079-ce5b-448b-9167-3add7d8c7881-catalog-content\") pod \"redhat-operators-fq78l\" (UID: \"caf40079-ce5b-448b-9167-3add7d8c7881\") " pod="openshift-marketplace/redhat-operators-fq78l" Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.200724 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8qjg\" (UniqueName: \"kubernetes.io/projected/caf40079-ce5b-448b-9167-3add7d8c7881-kube-api-access-x8qjg\") pod \"redhat-operators-fq78l\" (UID: \"caf40079-ce5b-448b-9167-3add7d8c7881\") " pod="openshift-marketplace/redhat-operators-fq78l" Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.200753 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/caf40079-ce5b-448b-9167-3add7d8c7881-utilities\") pod \"redhat-operators-fq78l\" (UID: \"caf40079-ce5b-448b-9167-3add7d8c7881\") " pod="openshift-marketplace/redhat-operators-fq78l" Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.201149 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/caf40079-ce5b-448b-9167-3add7d8c7881-catalog-content\") pod \"redhat-operators-fq78l\" (UID: \"caf40079-ce5b-448b-9167-3add7d8c7881\") " pod="openshift-marketplace/redhat-operators-fq78l" Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.201250 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/caf40079-ce5b-448b-9167-3add7d8c7881-utilities\") pod \"redhat-operators-fq78l\" (UID: \"caf40079-ce5b-448b-9167-3add7d8c7881\") " pod="openshift-marketplace/redhat-operators-fq78l" Feb 16 17:13:13 crc 
kubenswrapper[4870]: I0216 17:13:13.235481 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8qjg\" (UniqueName: \"kubernetes.io/projected/caf40079-ce5b-448b-9167-3add7d8c7881-kube-api-access-x8qjg\") pod \"redhat-operators-fq78l\" (UID: \"caf40079-ce5b-448b-9167-3add7d8c7881\") " pod="openshift-marketplace/redhat-operators-fq78l" Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.405803 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fq78l" Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.840434 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fq78l"] Feb 16 17:13:13 crc kubenswrapper[4870]: W0216 17:13:13.854233 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcaf40079_ce5b_448b_9167_3add7d8c7881.slice/crio-34cb3a5ca04fcce768b611c5597a7fc67577c8dc05290a06b3b955bcb0374b5c WatchSource:0}: Error finding container 34cb3a5ca04fcce768b611c5597a7fc67577c8dc05290a06b3b955bcb0374b5c: Status 404 returned error can't find the container with id 34cb3a5ca04fcce768b611c5597a7fc67577c8dc05290a06b3b955bcb0374b5c Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.942843 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.943849 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.946739 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.946916 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 16 17:13:13 crc kubenswrapper[4870]: I0216 17:13:13.951383 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 16 17:13:14 crc kubenswrapper[4870]: I0216 17:13:14.012482 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz6jv\" (UniqueName: \"kubernetes.io/projected/3d3740f6-a99d-4101-8f22-5d429cc635b7-kube-api-access-kz6jv\") pod \"minio\" (UID: \"3d3740f6-a99d-4101-8f22-5d429cc635b7\") " pod="minio-dev/minio" Feb 16 17:13:14 crc kubenswrapper[4870]: I0216 17:13:14.012558 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-935b6466-2bf5-45c7-9013-56c658cc246d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-935b6466-2bf5-45c7-9013-56c658cc246d\") pod \"minio\" (UID: \"3d3740f6-a99d-4101-8f22-5d429cc635b7\") " pod="minio-dev/minio" Feb 16 17:13:14 crc kubenswrapper[4870]: I0216 17:13:14.113938 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz6jv\" (UniqueName: \"kubernetes.io/projected/3d3740f6-a99d-4101-8f22-5d429cc635b7-kube-api-access-kz6jv\") pod \"minio\" (UID: \"3d3740f6-a99d-4101-8f22-5d429cc635b7\") " pod="minio-dev/minio" Feb 16 17:13:14 crc kubenswrapper[4870]: I0216 17:13:14.114041 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-935b6466-2bf5-45c7-9013-56c658cc246d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-935b6466-2bf5-45c7-9013-56c658cc246d\") pod \"minio\" (UID: 
\"3d3740f6-a99d-4101-8f22-5d429cc635b7\") " pod="minio-dev/minio" Feb 16 17:13:14 crc kubenswrapper[4870]: I0216 17:13:14.121048 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:13:14 crc kubenswrapper[4870]: I0216 17:13:14.121116 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-935b6466-2bf5-45c7-9013-56c658cc246d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-935b6466-2bf5-45c7-9013-56c658cc246d\") pod \"minio\" (UID: \"3d3740f6-a99d-4101-8f22-5d429cc635b7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7117c218a91015112144a495e949bd3f6a76d8016d2dc0fb1922c39fc309a7e9/globalmount\"" pod="minio-dev/minio" Feb 16 17:13:14 crc kubenswrapper[4870]: I0216 17:13:14.134486 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz6jv\" (UniqueName: \"kubernetes.io/projected/3d3740f6-a99d-4101-8f22-5d429cc635b7-kube-api-access-kz6jv\") pod \"minio\" (UID: \"3d3740f6-a99d-4101-8f22-5d429cc635b7\") " pod="minio-dev/minio" Feb 16 17:13:14 crc kubenswrapper[4870]: I0216 17:13:14.149031 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-935b6466-2bf5-45c7-9013-56c658cc246d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-935b6466-2bf5-45c7-9013-56c658cc246d\") pod \"minio\" (UID: \"3d3740f6-a99d-4101-8f22-5d429cc635b7\") " pod="minio-dev/minio" Feb 16 17:13:14 crc kubenswrapper[4870]: I0216 17:13:14.257700 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 16 17:13:14 crc kubenswrapper[4870]: I0216 17:13:14.726318 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 16 17:13:14 crc kubenswrapper[4870]: I0216 17:13:14.826284 4870 generic.go:334] "Generic (PLEG): container finished" podID="caf40079-ce5b-448b-9167-3add7d8c7881" containerID="626b79d4dd62e65c00e909c600226eaf8195a895a94d9bed34112043542540f3" exitCode=0 Feb 16 17:13:14 crc kubenswrapper[4870]: I0216 17:13:14.826360 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq78l" event={"ID":"caf40079-ce5b-448b-9167-3add7d8c7881","Type":"ContainerDied","Data":"626b79d4dd62e65c00e909c600226eaf8195a895a94d9bed34112043542540f3"} Feb 16 17:13:14 crc kubenswrapper[4870]: I0216 17:13:14.826439 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq78l" event={"ID":"caf40079-ce5b-448b-9167-3add7d8c7881","Type":"ContainerStarted","Data":"34cb3a5ca04fcce768b611c5597a7fc67577c8dc05290a06b3b955bcb0374b5c"} Feb 16 17:13:14 crc kubenswrapper[4870]: I0216 17:13:14.829546 4870 generic.go:334] "Generic (PLEG): container finished" podID="56929b47-8ff7-4aed-83a9-781ca5cf1c4a" containerID="a78cc2342572ecbdeb4f18f756a17b50787038bceb8919f67e44ab9019d90f94" exitCode=0 Feb 16 17:13:14 crc kubenswrapper[4870]: I0216 17:13:14.829623 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" event={"ID":"56929b47-8ff7-4aed-83a9-781ca5cf1c4a","Type":"ContainerDied","Data":"a78cc2342572ecbdeb4f18f756a17b50787038bceb8919f67e44ab9019d90f94"} Feb 16 17:13:14 crc kubenswrapper[4870]: I0216 17:13:14.833869 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"3d3740f6-a99d-4101-8f22-5d429cc635b7","Type":"ContainerStarted","Data":"d160523496b2bf8dfbcb9bd29eaab018616260bd6a683301126408d217e3ede6"} Feb 16 
17:13:15 crc kubenswrapper[4870]: I0216 17:13:15.840815 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq78l" event={"ID":"caf40079-ce5b-448b-9167-3add7d8c7881","Type":"ContainerStarted","Data":"39ad77c2a1ee988c0ea1ccdf86d2108e8fd69daf296642c9349eaa5cb8544621"} Feb 16 17:13:15 crc kubenswrapper[4870]: I0216 17:13:15.844891 4870 generic.go:334] "Generic (PLEG): container finished" podID="56929b47-8ff7-4aed-83a9-781ca5cf1c4a" containerID="01e7e3abb2a9b36b439cc3f44e9c79034cca3c5be90c7ac052841c5b0eea4b0f" exitCode=0 Feb 16 17:13:15 crc kubenswrapper[4870]: I0216 17:13:15.845000 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" event={"ID":"56929b47-8ff7-4aed-83a9-781ca5cf1c4a","Type":"ContainerDied","Data":"01e7e3abb2a9b36b439cc3f44e9c79034cca3c5be90c7ac052841c5b0eea4b0f"} Feb 16 17:13:16 crc kubenswrapper[4870]: I0216 17:13:16.861012 4870 generic.go:334] "Generic (PLEG): container finished" podID="caf40079-ce5b-448b-9167-3add7d8c7881" containerID="39ad77c2a1ee988c0ea1ccdf86d2108e8fd69daf296642c9349eaa5cb8544621" exitCode=0 Feb 16 17:13:16 crc kubenswrapper[4870]: I0216 17:13:16.861127 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq78l" event={"ID":"caf40079-ce5b-448b-9167-3add7d8c7881","Type":"ContainerDied","Data":"39ad77c2a1ee988c0ea1ccdf86d2108e8fd69daf296642c9349eaa5cb8544621"} Feb 16 17:13:17 crc kubenswrapper[4870]: I0216 17:13:17.544006 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" Feb 16 17:13:17 crc kubenswrapper[4870]: I0216 17:13:17.663754 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv8lf\" (UniqueName: \"kubernetes.io/projected/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-kube-api-access-dv8lf\") pod \"56929b47-8ff7-4aed-83a9-781ca5cf1c4a\" (UID: \"56929b47-8ff7-4aed-83a9-781ca5cf1c4a\") " Feb 16 17:13:17 crc kubenswrapper[4870]: I0216 17:13:17.663865 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-bundle\") pod \"56929b47-8ff7-4aed-83a9-781ca5cf1c4a\" (UID: \"56929b47-8ff7-4aed-83a9-781ca5cf1c4a\") " Feb 16 17:13:17 crc kubenswrapper[4870]: I0216 17:13:17.663905 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-util\") pod \"56929b47-8ff7-4aed-83a9-781ca5cf1c4a\" (UID: \"56929b47-8ff7-4aed-83a9-781ca5cf1c4a\") " Feb 16 17:13:17 crc kubenswrapper[4870]: I0216 17:13:17.666113 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-bundle" (OuterVolumeSpecName: "bundle") pod "56929b47-8ff7-4aed-83a9-781ca5cf1c4a" (UID: "56929b47-8ff7-4aed-83a9-781ca5cf1c4a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:13:17 crc kubenswrapper[4870]: I0216 17:13:17.668853 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-kube-api-access-dv8lf" (OuterVolumeSpecName: "kube-api-access-dv8lf") pod "56929b47-8ff7-4aed-83a9-781ca5cf1c4a" (UID: "56929b47-8ff7-4aed-83a9-781ca5cf1c4a"). InnerVolumeSpecName "kube-api-access-dv8lf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:13:17 crc kubenswrapper[4870]: I0216 17:13:17.677145 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-util" (OuterVolumeSpecName: "util") pod "56929b47-8ff7-4aed-83a9-781ca5cf1c4a" (UID: "56929b47-8ff7-4aed-83a9-781ca5cf1c4a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:13:17 crc kubenswrapper[4870]: I0216 17:13:17.764809 4870 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:13:17 crc kubenswrapper[4870]: I0216 17:13:17.764840 4870 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-util\") on node \"crc\" DevicePath \"\"" Feb 16 17:13:17 crc kubenswrapper[4870]: I0216 17:13:17.764849 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv8lf\" (UniqueName: \"kubernetes.io/projected/56929b47-8ff7-4aed-83a9-781ca5cf1c4a-kube-api-access-dv8lf\") on node \"crc\" DevicePath \"\"" Feb 16 17:13:17 crc kubenswrapper[4870]: I0216 17:13:17.869108 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" event={"ID":"56929b47-8ff7-4aed-83a9-781ca5cf1c4a","Type":"ContainerDied","Data":"bedfc1b7b56a12d602309613479c7c0abbee9f54aa47fdce2949185d86a8bc5e"} Feb 16 17:13:17 crc kubenswrapper[4870]: I0216 17:13:17.869147 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bedfc1b7b56a12d602309613479c7c0abbee9f54aa47fdce2949185d86a8bc5e" Feb 16 17:13:17 crc kubenswrapper[4870]: I0216 17:13:17.869209 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm" Feb 16 17:13:18 crc kubenswrapper[4870]: I0216 17:13:18.879336 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"3d3740f6-a99d-4101-8f22-5d429cc635b7","Type":"ContainerStarted","Data":"36504bec69ba7cf042effa0be8ca9c2088dd1b340aaba937e63c082468328804"} Feb 16 17:13:18 crc kubenswrapper[4870]: I0216 17:13:18.884115 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq78l" event={"ID":"caf40079-ce5b-448b-9167-3add7d8c7881","Type":"ContainerStarted","Data":"d2993b4c0ee7c5ea7f8a3236fc6be8c714c306ef3c16240a7c92439b89fbc08c"} Feb 16 17:13:18 crc kubenswrapper[4870]: I0216 17:13:18.908580 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.906472388 podStartE2EDuration="7.90854369s" podCreationTimestamp="2026-02-16 17:13:11 +0000 UTC" firstStartedPulling="2026-02-16 17:13:14.73336122 +0000 UTC m=+799.216825614" lastFinishedPulling="2026-02-16 17:13:17.735432532 +0000 UTC m=+802.218896916" observedRunningTime="2026-02-16 17:13:18.895801043 +0000 UTC m=+803.379265437" watchObservedRunningTime="2026-02-16 17:13:18.90854369 +0000 UTC m=+803.392008124" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.158250 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fq78l" podStartSLOduration=7.256124118 podStartE2EDuration="10.158230543s" podCreationTimestamp="2026-02-16 17:13:13 +0000 UTC" firstStartedPulling="2026-02-16 17:13:14.828008184 +0000 UTC m=+799.311472578" lastFinishedPulling="2026-02-16 17:13:17.730114619 +0000 UTC m=+802.213579003" observedRunningTime="2026-02-16 17:13:18.925483457 +0000 UTC m=+803.408947881" watchObservedRunningTime="2026-02-16 17:13:23.158230543 +0000 UTC m=+807.641694947" Feb 16 17:13:23 crc kubenswrapper[4870]: 
I0216 17:13:23.159312 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll"] Feb 16 17:13:23 crc kubenswrapper[4870]: E0216 17:13:23.159615 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56929b47-8ff7-4aed-83a9-781ca5cf1c4a" containerName="pull" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.159637 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="56929b47-8ff7-4aed-83a9-781ca5cf1c4a" containerName="pull" Feb 16 17:13:23 crc kubenswrapper[4870]: E0216 17:13:23.159664 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56929b47-8ff7-4aed-83a9-781ca5cf1c4a" containerName="util" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.159676 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="56929b47-8ff7-4aed-83a9-781ca5cf1c4a" containerName="util" Feb 16 17:13:23 crc kubenswrapper[4870]: E0216 17:13:23.159695 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56929b47-8ff7-4aed-83a9-781ca5cf1c4a" containerName="extract" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.159703 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="56929b47-8ff7-4aed-83a9-781ca5cf1c4a" containerName="extract" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.159833 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="56929b47-8ff7-4aed-83a9-781ca5cf1c4a" containerName="extract" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.160635 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.163167 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.163658 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.163672 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.163672 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.164013 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.164554 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-s9nkb" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.170615 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll"] Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.244218 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw5k6\" (UniqueName: \"kubernetes.io/projected/79183cb8-6455-4b66-9732-d3eb9604ab48-kube-api-access-gw5k6\") pod \"loki-operator-controller-manager-797c678dc4-pv4ll\" (UID: \"79183cb8-6455-4b66-9732-d3eb9604ab48\") " pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:23 crc kubenswrapper[4870]: 
I0216 17:13:23.244281 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79183cb8-6455-4b66-9732-d3eb9604ab48-apiservice-cert\") pod \"loki-operator-controller-manager-797c678dc4-pv4ll\" (UID: \"79183cb8-6455-4b66-9732-d3eb9604ab48\") " pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.244343 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/79183cb8-6455-4b66-9732-d3eb9604ab48-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-797c678dc4-pv4ll\" (UID: \"79183cb8-6455-4b66-9732-d3eb9604ab48\") " pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.244360 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/79183cb8-6455-4b66-9732-d3eb9604ab48-manager-config\") pod \"loki-operator-controller-manager-797c678dc4-pv4ll\" (UID: \"79183cb8-6455-4b66-9732-d3eb9604ab48\") " pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.244385 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79183cb8-6455-4b66-9732-d3eb9604ab48-webhook-cert\") pod \"loki-operator-controller-manager-797c678dc4-pv4ll\" (UID: \"79183cb8-6455-4b66-9732-d3eb9604ab48\") " pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.345932 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gw5k6\" (UniqueName: \"kubernetes.io/projected/79183cb8-6455-4b66-9732-d3eb9604ab48-kube-api-access-gw5k6\") pod \"loki-operator-controller-manager-797c678dc4-pv4ll\" (UID: \"79183cb8-6455-4b66-9732-d3eb9604ab48\") " pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.346009 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79183cb8-6455-4b66-9732-d3eb9604ab48-apiservice-cert\") pod \"loki-operator-controller-manager-797c678dc4-pv4ll\" (UID: \"79183cb8-6455-4b66-9732-d3eb9604ab48\") " pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.346105 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/79183cb8-6455-4b66-9732-d3eb9604ab48-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-797c678dc4-pv4ll\" (UID: \"79183cb8-6455-4b66-9732-d3eb9604ab48\") " pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.346129 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/79183cb8-6455-4b66-9732-d3eb9604ab48-manager-config\") pod \"loki-operator-controller-manager-797c678dc4-pv4ll\" (UID: \"79183cb8-6455-4b66-9732-d3eb9604ab48\") " pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.346186 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79183cb8-6455-4b66-9732-d3eb9604ab48-webhook-cert\") pod \"loki-operator-controller-manager-797c678dc4-pv4ll\" (UID: 
\"79183cb8-6455-4b66-9732-d3eb9604ab48\") " pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.348106 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/79183cb8-6455-4b66-9732-d3eb9604ab48-manager-config\") pod \"loki-operator-controller-manager-797c678dc4-pv4ll\" (UID: \"79183cb8-6455-4b66-9732-d3eb9604ab48\") " pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.354278 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79183cb8-6455-4b66-9732-d3eb9604ab48-apiservice-cert\") pod \"loki-operator-controller-manager-797c678dc4-pv4ll\" (UID: \"79183cb8-6455-4b66-9732-d3eb9604ab48\") " pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.354302 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79183cb8-6455-4b66-9732-d3eb9604ab48-webhook-cert\") pod \"loki-operator-controller-manager-797c678dc4-pv4ll\" (UID: \"79183cb8-6455-4b66-9732-d3eb9604ab48\") " pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.370858 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw5k6\" (UniqueName: \"kubernetes.io/projected/79183cb8-6455-4b66-9732-d3eb9604ab48-kube-api-access-gw5k6\") pod \"loki-operator-controller-manager-797c678dc4-pv4ll\" (UID: \"79183cb8-6455-4b66-9732-d3eb9604ab48\") " pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.371886 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/79183cb8-6455-4b66-9732-d3eb9604ab48-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-797c678dc4-pv4ll\" (UID: \"79183cb8-6455-4b66-9732-d3eb9604ab48\") " pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.406818 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fq78l" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.406860 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fq78l" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.477775 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.778624 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll"] Feb 16 17:13:23 crc kubenswrapper[4870]: I0216 17:13:23.914583 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" event={"ID":"79183cb8-6455-4b66-9732-d3eb9604ab48","Type":"ContainerStarted","Data":"6f9de75b635080dee48e9351ad68401b909a3a626de84c2b82c935f7ef58d0d2"} Feb 16 17:13:24 crc kubenswrapper[4870]: I0216 17:13:24.501557 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fq78l" podUID="caf40079-ce5b-448b-9167-3add7d8c7881" containerName="registry-server" probeResult="failure" output=< Feb 16 17:13:24 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s Feb 16 17:13:24 crc kubenswrapper[4870]: > Feb 16 17:13:28 crc kubenswrapper[4870]: I0216 17:13:28.943417 4870 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" event={"ID":"79183cb8-6455-4b66-9732-d3eb9604ab48","Type":"ContainerStarted","Data":"4150b76ac93b273a1e4e6214d04a4c77246e235ede1578b61aeaa2b82adaa536"} Feb 16 17:13:33 crc kubenswrapper[4870]: I0216 17:13:33.464027 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fq78l" Feb 16 17:13:33 crc kubenswrapper[4870]: I0216 17:13:33.539974 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fq78l" Feb 16 17:13:34 crc kubenswrapper[4870]: I0216 17:13:34.022374 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fq78l"] Feb 16 17:13:34 crc kubenswrapper[4870]: I0216 17:13:34.982015 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fq78l" podUID="caf40079-ce5b-448b-9167-3add7d8c7881" containerName="registry-server" containerID="cri-o://d2993b4c0ee7c5ea7f8a3236fc6be8c714c306ef3c16240a7c92439b89fbc08c" gracePeriod=2 Feb 16 17:13:35 crc kubenswrapper[4870]: I0216 17:13:35.989868 4870 generic.go:334] "Generic (PLEG): container finished" podID="caf40079-ce5b-448b-9167-3add7d8c7881" containerID="d2993b4c0ee7c5ea7f8a3236fc6be8c714c306ef3c16240a7c92439b89fbc08c" exitCode=0 Feb 16 17:13:35 crc kubenswrapper[4870]: I0216 17:13:35.989964 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq78l" event={"ID":"caf40079-ce5b-448b-9167-3add7d8c7881","Type":"ContainerDied","Data":"d2993b4c0ee7c5ea7f8a3236fc6be8c714c306ef3c16240a7c92439b89fbc08c"} Feb 16 17:13:36 crc kubenswrapper[4870]: I0216 17:13:36.385517 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fq78l" Feb 16 17:13:36 crc kubenswrapper[4870]: I0216 17:13:36.537652 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/caf40079-ce5b-448b-9167-3add7d8c7881-utilities\") pod \"caf40079-ce5b-448b-9167-3add7d8c7881\" (UID: \"caf40079-ce5b-448b-9167-3add7d8c7881\") " Feb 16 17:13:36 crc kubenswrapper[4870]: I0216 17:13:36.537762 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8qjg\" (UniqueName: \"kubernetes.io/projected/caf40079-ce5b-448b-9167-3add7d8c7881-kube-api-access-x8qjg\") pod \"caf40079-ce5b-448b-9167-3add7d8c7881\" (UID: \"caf40079-ce5b-448b-9167-3add7d8c7881\") " Feb 16 17:13:36 crc kubenswrapper[4870]: I0216 17:13:36.537801 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/caf40079-ce5b-448b-9167-3add7d8c7881-catalog-content\") pod \"caf40079-ce5b-448b-9167-3add7d8c7881\" (UID: \"caf40079-ce5b-448b-9167-3add7d8c7881\") " Feb 16 17:13:36 crc kubenswrapper[4870]: I0216 17:13:36.538621 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caf40079-ce5b-448b-9167-3add7d8c7881-utilities" (OuterVolumeSpecName: "utilities") pod "caf40079-ce5b-448b-9167-3add7d8c7881" (UID: "caf40079-ce5b-448b-9167-3add7d8c7881"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:13:36 crc kubenswrapper[4870]: I0216 17:13:36.543513 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caf40079-ce5b-448b-9167-3add7d8c7881-kube-api-access-x8qjg" (OuterVolumeSpecName: "kube-api-access-x8qjg") pod "caf40079-ce5b-448b-9167-3add7d8c7881" (UID: "caf40079-ce5b-448b-9167-3add7d8c7881"). InnerVolumeSpecName "kube-api-access-x8qjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:13:36 crc kubenswrapper[4870]: I0216 17:13:36.639029 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/caf40079-ce5b-448b-9167-3add7d8c7881-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:13:36 crc kubenswrapper[4870]: I0216 17:13:36.639068 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8qjg\" (UniqueName: \"kubernetes.io/projected/caf40079-ce5b-448b-9167-3add7d8c7881-kube-api-access-x8qjg\") on node \"crc\" DevicePath \"\"" Feb 16 17:13:36 crc kubenswrapper[4870]: I0216 17:13:36.697132 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caf40079-ce5b-448b-9167-3add7d8c7881-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "caf40079-ce5b-448b-9167-3add7d8c7881" (UID: "caf40079-ce5b-448b-9167-3add7d8c7881"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:13:36 crc kubenswrapper[4870]: I0216 17:13:36.739889 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/caf40079-ce5b-448b-9167-3add7d8c7881-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:13:36 crc kubenswrapper[4870]: I0216 17:13:36.997782 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq78l" event={"ID":"caf40079-ce5b-448b-9167-3add7d8c7881","Type":"ContainerDied","Data":"34cb3a5ca04fcce768b611c5597a7fc67577c8dc05290a06b3b955bcb0374b5c"} Feb 16 17:13:36 crc kubenswrapper[4870]: I0216 17:13:36.997819 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fq78l" Feb 16 17:13:36 crc kubenswrapper[4870]: I0216 17:13:36.997850 4870 scope.go:117] "RemoveContainer" containerID="d2993b4c0ee7c5ea7f8a3236fc6be8c714c306ef3c16240a7c92439b89fbc08c" Feb 16 17:13:37 crc kubenswrapper[4870]: I0216 17:13:37.000292 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" event={"ID":"79183cb8-6455-4b66-9732-d3eb9604ab48","Type":"ContainerStarted","Data":"3a9ad40a7515ce5d9c4df7356155a1a5c0f6cb2456499e8214532b2d37cbadfd"} Feb 16 17:13:37 crc kubenswrapper[4870]: I0216 17:13:37.001112 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:37 crc kubenswrapper[4870]: I0216 17:13:37.005545 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" Feb 16 17:13:37 crc kubenswrapper[4870]: I0216 17:13:37.018186 4870 scope.go:117] "RemoveContainer" containerID="39ad77c2a1ee988c0ea1ccdf86d2108e8fd69daf296642c9349eaa5cb8544621" Feb 16 17:13:37 crc kubenswrapper[4870]: I0216 17:13:37.032296 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-797c678dc4-pv4ll" podStartSLOduration=1.566590691 podStartE2EDuration="14.032276363s" podCreationTimestamp="2026-02-16 17:13:23 +0000 UTC" firstStartedPulling="2026-02-16 17:13:23.785295662 +0000 UTC m=+808.268760056" lastFinishedPulling="2026-02-16 17:13:36.250981344 +0000 UTC m=+820.734445728" observedRunningTime="2026-02-16 17:13:37.029526624 +0000 UTC m=+821.512991008" watchObservedRunningTime="2026-02-16 17:13:37.032276363 +0000 UTC m=+821.515740747" Feb 16 17:13:37 crc kubenswrapper[4870]: I0216 17:13:37.058158 4870 scope.go:117] "RemoveContainer" 
containerID="626b79d4dd62e65c00e909c600226eaf8195a895a94d9bed34112043542540f3" Feb 16 17:13:37 crc kubenswrapper[4870]: I0216 17:13:37.091972 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fq78l"] Feb 16 17:13:37 crc kubenswrapper[4870]: I0216 17:13:37.100460 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fq78l"] Feb 16 17:13:38 crc kubenswrapper[4870]: I0216 17:13:38.230936 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caf40079-ce5b-448b-9167-3add7d8c7881" path="/var/lib/kubelet/pods/caf40079-ce5b-448b-9167-3add7d8c7881/volumes" Feb 16 17:14:08 crc kubenswrapper[4870]: I0216 17:14:08.701073 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45"] Feb 16 17:14:08 crc kubenswrapper[4870]: E0216 17:14:08.701646 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caf40079-ce5b-448b-9167-3add7d8c7881" containerName="registry-server" Feb 16 17:14:08 crc kubenswrapper[4870]: I0216 17:14:08.701658 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="caf40079-ce5b-448b-9167-3add7d8c7881" containerName="registry-server" Feb 16 17:14:08 crc kubenswrapper[4870]: E0216 17:14:08.701672 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caf40079-ce5b-448b-9167-3add7d8c7881" containerName="extract-utilities" Feb 16 17:14:08 crc kubenswrapper[4870]: I0216 17:14:08.701678 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="caf40079-ce5b-448b-9167-3add7d8c7881" containerName="extract-utilities" Feb 16 17:14:08 crc kubenswrapper[4870]: E0216 17:14:08.701686 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caf40079-ce5b-448b-9167-3add7d8c7881" containerName="extract-content" Feb 16 17:14:08 crc kubenswrapper[4870]: I0216 17:14:08.701692 4870 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="caf40079-ce5b-448b-9167-3add7d8c7881" containerName="extract-content" Feb 16 17:14:08 crc kubenswrapper[4870]: I0216 17:14:08.701803 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="caf40079-ce5b-448b-9167-3add7d8c7881" containerName="registry-server" Feb 16 17:14:08 crc kubenswrapper[4870]: I0216 17:14:08.702506 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" Feb 16 17:14:08 crc kubenswrapper[4870]: I0216 17:14:08.705979 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 17:14:08 crc kubenswrapper[4870]: I0216 17:14:08.712929 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45"] Feb 16 17:14:08 crc kubenswrapper[4870]: I0216 17:14:08.782898 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwzvc\" (UniqueName: \"kubernetes.io/projected/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-kube-api-access-xwzvc\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45\" (UID: \"428b9999-2e8a-4a52-9f89-c71abd6cd8a2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" Feb 16 17:14:08 crc kubenswrapper[4870]: I0216 17:14:08.783388 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45\" (UID: \"428b9999-2e8a-4a52-9f89-c71abd6cd8a2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" Feb 16 17:14:08 crc kubenswrapper[4870]: I0216 17:14:08.783426 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45\" (UID: \"428b9999-2e8a-4a52-9f89-c71abd6cd8a2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" Feb 16 17:14:08 crc kubenswrapper[4870]: I0216 17:14:08.884901 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwzvc\" (UniqueName: \"kubernetes.io/projected/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-kube-api-access-xwzvc\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45\" (UID: \"428b9999-2e8a-4a52-9f89-c71abd6cd8a2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" Feb 16 17:14:08 crc kubenswrapper[4870]: I0216 17:14:08.884988 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45\" (UID: \"428b9999-2e8a-4a52-9f89-c71abd6cd8a2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" Feb 16 17:14:08 crc kubenswrapper[4870]: I0216 17:14:08.885037 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45\" (UID: \"428b9999-2e8a-4a52-9f89-c71abd6cd8a2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" Feb 16 17:14:08 crc kubenswrapper[4870]: I0216 17:14:08.885697 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45\" (UID: \"428b9999-2e8a-4a52-9f89-c71abd6cd8a2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" Feb 16 17:14:08 crc kubenswrapper[4870]: I0216 17:14:08.885707 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45\" (UID: \"428b9999-2e8a-4a52-9f89-c71abd6cd8a2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" Feb 16 17:14:08 crc kubenswrapper[4870]: I0216 17:14:08.907213 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwzvc\" (UniqueName: \"kubernetes.io/projected/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-kube-api-access-xwzvc\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45\" (UID: \"428b9999-2e8a-4a52-9f89-c71abd6cd8a2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" Feb 16 17:14:09 crc kubenswrapper[4870]: I0216 17:14:09.065426 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" Feb 16 17:14:09 crc kubenswrapper[4870]: I0216 17:14:09.536280 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45"] Feb 16 17:14:10 crc kubenswrapper[4870]: I0216 17:14:10.233298 4870 generic.go:334] "Generic (PLEG): container finished" podID="428b9999-2e8a-4a52-9f89-c71abd6cd8a2" containerID="39fa69d1341608fee6ea7194b040ba5fbcb5f3a7b229f80b1dc1c32b74582a3f" exitCode=0 Feb 16 17:14:10 crc kubenswrapper[4870]: I0216 17:14:10.233339 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" event={"ID":"428b9999-2e8a-4a52-9f89-c71abd6cd8a2","Type":"ContainerDied","Data":"39fa69d1341608fee6ea7194b040ba5fbcb5f3a7b229f80b1dc1c32b74582a3f"} Feb 16 17:14:10 crc kubenswrapper[4870]: I0216 17:14:10.233596 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" event={"ID":"428b9999-2e8a-4a52-9f89-c71abd6cd8a2","Type":"ContainerStarted","Data":"160741bd68c188580c6d0a520de29797cd2a07b459414656002a27d557d9fdf4"} Feb 16 17:14:12 crc kubenswrapper[4870]: I0216 17:14:12.245456 4870 generic.go:334] "Generic (PLEG): container finished" podID="428b9999-2e8a-4a52-9f89-c71abd6cd8a2" containerID="ce4d70b433851562665e5eca656c70e4ff65437a0ef867d46d83f2487055aea1" exitCode=0 Feb 16 17:14:12 crc kubenswrapper[4870]: I0216 17:14:12.245521 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" event={"ID":"428b9999-2e8a-4a52-9f89-c71abd6cd8a2","Type":"ContainerDied","Data":"ce4d70b433851562665e5eca656c70e4ff65437a0ef867d46d83f2487055aea1"} Feb 16 17:14:13 crc kubenswrapper[4870]: I0216 17:14:13.256763 4870 
generic.go:334] "Generic (PLEG): container finished" podID="428b9999-2e8a-4a52-9f89-c71abd6cd8a2" containerID="1142e3ad07e49a9009fcdb228ad2734960384911a6bd0a3c1000574cac544997" exitCode=0 Feb 16 17:14:13 crc kubenswrapper[4870]: I0216 17:14:13.256852 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" event={"ID":"428b9999-2e8a-4a52-9f89-c71abd6cd8a2","Type":"ContainerDied","Data":"1142e3ad07e49a9009fcdb228ad2734960384911a6bd0a3c1000574cac544997"} Feb 16 17:14:14 crc kubenswrapper[4870]: I0216 17:14:14.526061 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" Feb 16 17:14:14 crc kubenswrapper[4870]: I0216 17:14:14.558855 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-bundle\") pod \"428b9999-2e8a-4a52-9f89-c71abd6cd8a2\" (UID: \"428b9999-2e8a-4a52-9f89-c71abd6cd8a2\") " Feb 16 17:14:14 crc kubenswrapper[4870]: I0216 17:14:14.558913 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwzvc\" (UniqueName: \"kubernetes.io/projected/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-kube-api-access-xwzvc\") pod \"428b9999-2e8a-4a52-9f89-c71abd6cd8a2\" (UID: \"428b9999-2e8a-4a52-9f89-c71abd6cd8a2\") " Feb 16 17:14:14 crc kubenswrapper[4870]: I0216 17:14:14.559025 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-util\") pod \"428b9999-2e8a-4a52-9f89-c71abd6cd8a2\" (UID: \"428b9999-2e8a-4a52-9f89-c71abd6cd8a2\") " Feb 16 17:14:14 crc kubenswrapper[4870]: I0216 17:14:14.560256 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-bundle" (OuterVolumeSpecName: "bundle") pod "428b9999-2e8a-4a52-9f89-c71abd6cd8a2" (UID: "428b9999-2e8a-4a52-9f89-c71abd6cd8a2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:14:14 crc kubenswrapper[4870]: I0216 17:14:14.564716 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-kube-api-access-xwzvc" (OuterVolumeSpecName: "kube-api-access-xwzvc") pod "428b9999-2e8a-4a52-9f89-c71abd6cd8a2" (UID: "428b9999-2e8a-4a52-9f89-c71abd6cd8a2"). InnerVolumeSpecName "kube-api-access-xwzvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:14:14 crc kubenswrapper[4870]: I0216 17:14:14.574423 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-util" (OuterVolumeSpecName: "util") pod "428b9999-2e8a-4a52-9f89-c71abd6cd8a2" (UID: "428b9999-2e8a-4a52-9f89-c71abd6cd8a2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:14:14 crc kubenswrapper[4870]: I0216 17:14:14.660616 4870 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-util\") on node \"crc\" DevicePath \"\"" Feb 16 17:14:14 crc kubenswrapper[4870]: I0216 17:14:14.660659 4870 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:14:14 crc kubenswrapper[4870]: I0216 17:14:14.660669 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwzvc\" (UniqueName: \"kubernetes.io/projected/428b9999-2e8a-4a52-9f89-c71abd6cd8a2-kube-api-access-xwzvc\") on node \"crc\" DevicePath \"\"" Feb 16 17:14:15 crc kubenswrapper[4870]: I0216 17:14:15.272245 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" event={"ID":"428b9999-2e8a-4a52-9f89-c71abd6cd8a2","Type":"ContainerDied","Data":"160741bd68c188580c6d0a520de29797cd2a07b459414656002a27d557d9fdf4"} Feb 16 17:14:15 crc kubenswrapper[4870]: I0216 17:14:15.272591 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="160741bd68c188580c6d0a520de29797cd2a07b459414656002a27d557d9fdf4" Feb 16 17:14:15 crc kubenswrapper[4870]: I0216 17:14:15.272548 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45" Feb 16 17:14:20 crc kubenswrapper[4870]: I0216 17:14:20.552226 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-6vj2z"] Feb 16 17:14:20 crc kubenswrapper[4870]: E0216 17:14:20.552755 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="428b9999-2e8a-4a52-9f89-c71abd6cd8a2" containerName="extract" Feb 16 17:14:20 crc kubenswrapper[4870]: I0216 17:14:20.552770 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="428b9999-2e8a-4a52-9f89-c71abd6cd8a2" containerName="extract" Feb 16 17:14:20 crc kubenswrapper[4870]: E0216 17:14:20.552786 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="428b9999-2e8a-4a52-9f89-c71abd6cd8a2" containerName="pull" Feb 16 17:14:20 crc kubenswrapper[4870]: I0216 17:14:20.552810 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="428b9999-2e8a-4a52-9f89-c71abd6cd8a2" containerName="pull" Feb 16 17:14:20 crc kubenswrapper[4870]: E0216 17:14:20.552821 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="428b9999-2e8a-4a52-9f89-c71abd6cd8a2" containerName="util" Feb 16 17:14:20 crc kubenswrapper[4870]: I0216 17:14:20.552831 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="428b9999-2e8a-4a52-9f89-c71abd6cd8a2" containerName="util" Feb 16 17:14:20 crc kubenswrapper[4870]: I0216 17:14:20.552976 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="428b9999-2e8a-4a52-9f89-c71abd6cd8a2" containerName="extract" Feb 16 17:14:20 crc kubenswrapper[4870]: I0216 17:14:20.553450 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-6vj2z" Feb 16 17:14:20 crc kubenswrapper[4870]: I0216 17:14:20.558125 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 16 17:14:20 crc kubenswrapper[4870]: I0216 17:14:20.558497 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-t8htl" Feb 16 17:14:20 crc kubenswrapper[4870]: I0216 17:14:20.558698 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 16 17:14:20 crc kubenswrapper[4870]: I0216 17:14:20.570931 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-6vj2z"] Feb 16 17:14:20 crc kubenswrapper[4870]: I0216 17:14:20.655349 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64fhw\" (UniqueName: \"kubernetes.io/projected/bd8f5c2a-410f-40e4-9672-272f95aacea1-kube-api-access-64fhw\") pod \"nmstate-operator-694c9596b7-6vj2z\" (UID: \"bd8f5c2a-410f-40e4-9672-272f95aacea1\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-6vj2z" Feb 16 17:14:20 crc kubenswrapper[4870]: I0216 17:14:20.756004 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64fhw\" (UniqueName: \"kubernetes.io/projected/bd8f5c2a-410f-40e4-9672-272f95aacea1-kube-api-access-64fhw\") pod \"nmstate-operator-694c9596b7-6vj2z\" (UID: \"bd8f5c2a-410f-40e4-9672-272f95aacea1\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-6vj2z" Feb 16 17:14:20 crc kubenswrapper[4870]: I0216 17:14:20.785281 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64fhw\" (UniqueName: \"kubernetes.io/projected/bd8f5c2a-410f-40e4-9672-272f95aacea1-kube-api-access-64fhw\") pod \"nmstate-operator-694c9596b7-6vj2z\" (UID: 
\"bd8f5c2a-410f-40e4-9672-272f95aacea1\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-6vj2z" Feb 16 17:14:20 crc kubenswrapper[4870]: I0216 17:14:20.872206 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-6vj2z" Feb 16 17:14:21 crc kubenswrapper[4870]: I0216 17:14:21.104660 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-6vj2z"] Feb 16 17:14:21 crc kubenswrapper[4870]: W0216 17:14:21.118355 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd8f5c2a_410f_40e4_9672_272f95aacea1.slice/crio-bccafc53dc1044278ba522802435419f6ec044bb66233557b943c07909a87dd3 WatchSource:0}: Error finding container bccafc53dc1044278ba522802435419f6ec044bb66233557b943c07909a87dd3: Status 404 returned error can't find the container with id bccafc53dc1044278ba522802435419f6ec044bb66233557b943c07909a87dd3 Feb 16 17:14:21 crc kubenswrapper[4870]: I0216 17:14:21.306827 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-6vj2z" event={"ID":"bd8f5c2a-410f-40e4-9672-272f95aacea1","Type":"ContainerStarted","Data":"bccafc53dc1044278ba522802435419f6ec044bb66233557b943c07909a87dd3"} Feb 16 17:14:24 crc kubenswrapper[4870]: I0216 17:14:24.323652 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-6vj2z" event={"ID":"bd8f5c2a-410f-40e4-9672-272f95aacea1","Type":"ContainerStarted","Data":"32800f6939c69c4b71459c6e006adf7aad48cc4081a4ac4bc4992c9888bfb0bd"} Feb 16 17:14:24 crc kubenswrapper[4870]: I0216 17:14:24.343515 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-6vj2z" podStartSLOduration=2.211153353 podStartE2EDuration="4.343500364s" podCreationTimestamp="2026-02-16 17:14:20 +0000 UTC" 
firstStartedPulling="2026-02-16 17:14:21.126078317 +0000 UTC m=+865.609542701" lastFinishedPulling="2026-02-16 17:14:23.258425328 +0000 UTC m=+867.741889712" observedRunningTime="2026-02-16 17:14:24.339394137 +0000 UTC m=+868.822858541" watchObservedRunningTime="2026-02-16 17:14:24.343500364 +0000 UTC m=+868.826964748" Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.842008 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-klh69"] Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.843510 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-klh69" Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.855010 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-klh69"] Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.860439 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-cx2dx" Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.863862 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-s4fff"] Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.864649 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-s4fff" Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.868402 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.882150 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/699cb420-5e4c-42ea-841c-b368459f6a2e-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-s4fff\" (UID: \"699cb420-5e4c-42ea-841c-b368459f6a2e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-s4fff" Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.882229 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r9m9\" (UniqueName: \"kubernetes.io/projected/cb34d88d-4dbf-4253-8537-dda975a9985a-kube-api-access-9r9m9\") pod \"nmstate-metrics-58c85c668d-klh69\" (UID: \"cb34d88d-4dbf-4253-8537-dda975a9985a\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-klh69" Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.882298 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nvhn\" (UniqueName: \"kubernetes.io/projected/699cb420-5e4c-42ea-841c-b368459f6a2e-kube-api-access-4nvhn\") pod \"nmstate-webhook-866bcb46dc-s4fff\" (UID: \"699cb420-5e4c-42ea-841c-b368459f6a2e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-s4fff" Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.887446 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-s4fff"] Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.896305 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-5xr9s"] Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.897461 4870 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-5xr9s" Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.983863 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/16269c6b-867e-4277-91a2-52456a4424f2-ovs-socket\") pod \"nmstate-handler-5xr9s\" (UID: \"16269c6b-867e-4277-91a2-52456a4424f2\") " pod="openshift-nmstate/nmstate-handler-5xr9s" Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.983916 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/16269c6b-867e-4277-91a2-52456a4424f2-nmstate-lock\") pod \"nmstate-handler-5xr9s\" (UID: \"16269c6b-867e-4277-91a2-52456a4424f2\") " pod="openshift-nmstate/nmstate-handler-5xr9s" Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.983989 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/699cb420-5e4c-42ea-841c-b368459f6a2e-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-s4fff\" (UID: \"699cb420-5e4c-42ea-841c-b368459f6a2e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-s4fff" Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.984013 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r9m9\" (UniqueName: \"kubernetes.io/projected/cb34d88d-4dbf-4253-8537-dda975a9985a-kube-api-access-9r9m9\") pod \"nmstate-metrics-58c85c668d-klh69\" (UID: \"cb34d88d-4dbf-4253-8537-dda975a9985a\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-klh69" Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.984235 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/16269c6b-867e-4277-91a2-52456a4424f2-dbus-socket\") pod \"nmstate-handler-5xr9s\" (UID: 
\"16269c6b-867e-4277-91a2-52456a4424f2\") " pod="openshift-nmstate/nmstate-handler-5xr9s" Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.984258 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l52mv\" (UniqueName: \"kubernetes.io/projected/16269c6b-867e-4277-91a2-52456a4424f2-kube-api-access-l52mv\") pod \"nmstate-handler-5xr9s\" (UID: \"16269c6b-867e-4277-91a2-52456a4424f2\") " pod="openshift-nmstate/nmstate-handler-5xr9s" Feb 16 17:14:29 crc kubenswrapper[4870]: I0216 17:14:29.984286 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nvhn\" (UniqueName: \"kubernetes.io/projected/699cb420-5e4c-42ea-841c-b368459f6a2e-kube-api-access-4nvhn\") pod \"nmstate-webhook-866bcb46dc-s4fff\" (UID: \"699cb420-5e4c-42ea-841c-b368459f6a2e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-s4fff" Feb 16 17:14:29 crc kubenswrapper[4870]: E0216 17:14:29.984344 4870 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 16 17:14:29 crc kubenswrapper[4870]: E0216 17:14:29.984494 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/699cb420-5e4c-42ea-841c-b368459f6a2e-tls-key-pair podName:699cb420-5e4c-42ea-841c-b368459f6a2e nodeName:}" failed. No retries permitted until 2026-02-16 17:14:30.48446259 +0000 UTC m=+874.967926974 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/699cb420-5e4c-42ea-841c-b368459f6a2e-tls-key-pair") pod "nmstate-webhook-866bcb46dc-s4fff" (UID: "699cb420-5e4c-42ea-841c-b368459f6a2e") : secret "openshift-nmstate-webhook" not found Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.007548 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nvhn\" (UniqueName: \"kubernetes.io/projected/699cb420-5e4c-42ea-841c-b368459f6a2e-kube-api-access-4nvhn\") pod \"nmstate-webhook-866bcb46dc-s4fff\" (UID: \"699cb420-5e4c-42ea-841c-b368459f6a2e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-s4fff" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.008789 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r9m9\" (UniqueName: \"kubernetes.io/projected/cb34d88d-4dbf-4253-8537-dda975a9985a-kube-api-access-9r9m9\") pod \"nmstate-metrics-58c85c668d-klh69\" (UID: \"cb34d88d-4dbf-4253-8537-dda975a9985a\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-klh69" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.017997 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd"] Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.018747 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.020756 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.021348 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-w8nk6" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.021652 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.030752 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd"] Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.086292 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/16269c6b-867e-4277-91a2-52456a4424f2-ovs-socket\") pod \"nmstate-handler-5xr9s\" (UID: \"16269c6b-867e-4277-91a2-52456a4424f2\") " pod="openshift-nmstate/nmstate-handler-5xr9s" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.086343 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/16269c6b-867e-4277-91a2-52456a4424f2-nmstate-lock\") pod \"nmstate-handler-5xr9s\" (UID: \"16269c6b-867e-4277-91a2-52456a4424f2\") " pod="openshift-nmstate/nmstate-handler-5xr9s" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.086392 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69nwg\" (UniqueName: \"kubernetes.io/projected/8d8293d9-58e9-4f2f-a81e-efbcdfe14d27-kube-api-access-69nwg\") pod \"nmstate-console-plugin-5c78fc5d65-bhvfd\" (UID: \"8d8293d9-58e9-4f2f-a81e-efbcdfe14d27\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd" Feb 16 
17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.086420 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8d8293d9-58e9-4f2f-a81e-efbcdfe14d27-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-bhvfd\" (UID: \"8d8293d9-58e9-4f2f-a81e-efbcdfe14d27\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.086460 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/16269c6b-867e-4277-91a2-52456a4424f2-ovs-socket\") pod \"nmstate-handler-5xr9s\" (UID: \"16269c6b-867e-4277-91a2-52456a4424f2\") " pod="openshift-nmstate/nmstate-handler-5xr9s" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.086489 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/16269c6b-867e-4277-91a2-52456a4424f2-nmstate-lock\") pod \"nmstate-handler-5xr9s\" (UID: \"16269c6b-867e-4277-91a2-52456a4424f2\") " pod="openshift-nmstate/nmstate-handler-5xr9s" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.086638 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/16269c6b-867e-4277-91a2-52456a4424f2-dbus-socket\") pod \"nmstate-handler-5xr9s\" (UID: \"16269c6b-867e-4277-91a2-52456a4424f2\") " pod="openshift-nmstate/nmstate-handler-5xr9s" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.086676 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l52mv\" (UniqueName: \"kubernetes.io/projected/16269c6b-867e-4277-91a2-52456a4424f2-kube-api-access-l52mv\") pod \"nmstate-handler-5xr9s\" (UID: \"16269c6b-867e-4277-91a2-52456a4424f2\") " pod="openshift-nmstate/nmstate-handler-5xr9s" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.086748 
4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d8293d9-58e9-4f2f-a81e-efbcdfe14d27-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-bhvfd\" (UID: \"8d8293d9-58e9-4f2f-a81e-efbcdfe14d27\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.086981 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/16269c6b-867e-4277-91a2-52456a4424f2-dbus-socket\") pod \"nmstate-handler-5xr9s\" (UID: \"16269c6b-867e-4277-91a2-52456a4424f2\") " pod="openshift-nmstate/nmstate-handler-5xr9s" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.101897 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l52mv\" (UniqueName: \"kubernetes.io/projected/16269c6b-867e-4277-91a2-52456a4424f2-kube-api-access-l52mv\") pod \"nmstate-handler-5xr9s\" (UID: \"16269c6b-867e-4277-91a2-52456a4424f2\") " pod="openshift-nmstate/nmstate-handler-5xr9s" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.164023 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-klh69" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.190302 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d8293d9-58e9-4f2f-a81e-efbcdfe14d27-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-bhvfd\" (UID: \"8d8293d9-58e9-4f2f-a81e-efbcdfe14d27\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.190410 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69nwg\" (UniqueName: \"kubernetes.io/projected/8d8293d9-58e9-4f2f-a81e-efbcdfe14d27-kube-api-access-69nwg\") pod \"nmstate-console-plugin-5c78fc5d65-bhvfd\" (UID: \"8d8293d9-58e9-4f2f-a81e-efbcdfe14d27\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.190446 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8d8293d9-58e9-4f2f-a81e-efbcdfe14d27-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-bhvfd\" (UID: \"8d8293d9-58e9-4f2f-a81e-efbcdfe14d27\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd" Feb 16 17:14:30 crc kubenswrapper[4870]: E0216 17:14:30.190521 4870 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 16 17:14:30 crc kubenswrapper[4870]: E0216 17:14:30.190618 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d8293d9-58e9-4f2f-a81e-efbcdfe14d27-plugin-serving-cert podName:8d8293d9-58e9-4f2f-a81e-efbcdfe14d27 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:30.690597618 +0000 UTC m=+875.174062002 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/8d8293d9-58e9-4f2f-a81e-efbcdfe14d27-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-bhvfd" (UID: "8d8293d9-58e9-4f2f-a81e-efbcdfe14d27") : secret "plugin-serving-cert" not found Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.191675 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8d8293d9-58e9-4f2f-a81e-efbcdfe14d27-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-bhvfd\" (UID: \"8d8293d9-58e9-4f2f-a81e-efbcdfe14d27\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.225455 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-5xr9s" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.227713 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69nwg\" (UniqueName: \"kubernetes.io/projected/8d8293d9-58e9-4f2f-a81e-efbcdfe14d27-kube-api-access-69nwg\") pod \"nmstate-console-plugin-5c78fc5d65-bhvfd\" (UID: \"8d8293d9-58e9-4f2f-a81e-efbcdfe14d27\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.251627 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b5869d457-khjj4"] Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.252530 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.272925 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b5869d457-khjj4"] Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.367113 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5xr9s" event={"ID":"16269c6b-867e-4277-91a2-52456a4424f2","Type":"ContainerStarted","Data":"677015a320622d2fd55010c26022eb346cdf6531b1ec157d8ad084c478844181"} Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.416831 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/336f228c-ae0a-45c4-be67-b37e6fe2e063-trusted-ca-bundle\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.417053 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/336f228c-ae0a-45c4-be67-b37e6fe2e063-service-ca\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.417102 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/336f228c-ae0a-45c4-be67-b37e6fe2e063-console-serving-cert\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.417199 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/336f228c-ae0a-45c4-be67-b37e6fe2e063-console-config\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.417265 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/336f228c-ae0a-45c4-be67-b37e6fe2e063-oauth-serving-cert\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.417288 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzmc6\" (UniqueName: \"kubernetes.io/projected/336f228c-ae0a-45c4-be67-b37e6fe2e063-kube-api-access-mzmc6\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.417440 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/336f228c-ae0a-45c4-be67-b37e6fe2e063-console-oauth-config\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.468635 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-klh69"] Feb 16 17:14:30 crc kubenswrapper[4870]: W0216 17:14:30.474902 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb34d88d_4dbf_4253_8537_dda975a9985a.slice/crio-cf9ac6ec597b32633f5dd69ac3894b3cc546541a3aed4132262b1b8c20bf9e90 WatchSource:0}: Error finding container 
cf9ac6ec597b32633f5dd69ac3894b3cc546541a3aed4132262b1b8c20bf9e90: Status 404 returned error can't find the container with id cf9ac6ec597b32633f5dd69ac3894b3cc546541a3aed4132262b1b8c20bf9e90 Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.518551 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/336f228c-ae0a-45c4-be67-b37e6fe2e063-service-ca\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.518613 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/336f228c-ae0a-45c4-be67-b37e6fe2e063-console-serving-cert\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.518653 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/336f228c-ae0a-45c4-be67-b37e6fe2e063-console-config\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.518685 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzmc6\" (UniqueName: \"kubernetes.io/projected/336f228c-ae0a-45c4-be67-b37e6fe2e063-kube-api-access-mzmc6\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.518704 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/336f228c-ae0a-45c4-be67-b37e6fe2e063-oauth-serving-cert\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.518747 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/699cb420-5e4c-42ea-841c-b368459f6a2e-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-s4fff\" (UID: \"699cb420-5e4c-42ea-841c-b368459f6a2e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-s4fff" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.518771 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/336f228c-ae0a-45c4-be67-b37e6fe2e063-console-oauth-config\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.518822 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/336f228c-ae0a-45c4-be67-b37e6fe2e063-trusted-ca-bundle\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.519918 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/336f228c-ae0a-45c4-be67-b37e6fe2e063-service-ca\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.520171 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/336f228c-ae0a-45c4-be67-b37e6fe2e063-console-config\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.520320 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/336f228c-ae0a-45c4-be67-b37e6fe2e063-oauth-serving-cert\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.520443 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/336f228c-ae0a-45c4-be67-b37e6fe2e063-trusted-ca-bundle\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.525346 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/336f228c-ae0a-45c4-be67-b37e6fe2e063-console-serving-cert\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.525349 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/699cb420-5e4c-42ea-841c-b368459f6a2e-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-s4fff\" (UID: \"699cb420-5e4c-42ea-841c-b368459f6a2e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-s4fff" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.525498 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/336f228c-ae0a-45c4-be67-b37e6fe2e063-console-oauth-config\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.533674 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzmc6\" (UniqueName: \"kubernetes.io/projected/336f228c-ae0a-45c4-be67-b37e6fe2e063-kube-api-access-mzmc6\") pod \"console-6b5869d457-khjj4\" (UID: \"336f228c-ae0a-45c4-be67-b37e6fe2e063\") " pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.631914 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.720799 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d8293d9-58e9-4f2f-a81e-efbcdfe14d27-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-bhvfd\" (UID: \"8d8293d9-58e9-4f2f-a81e-efbcdfe14d27\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.726633 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d8293d9-58e9-4f2f-a81e-efbcdfe14d27-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-bhvfd\" (UID: \"8d8293d9-58e9-4f2f-a81e-efbcdfe14d27\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.780320 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-s4fff" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.873825 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b5869d457-khjj4"] Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.960415 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd" Feb 16 17:14:30 crc kubenswrapper[4870]: I0216 17:14:30.992371 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-s4fff"] Feb 16 17:14:30 crc kubenswrapper[4870]: W0216 17:14:30.996594 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod699cb420_5e4c_42ea_841c_b368459f6a2e.slice/crio-9e76ab1fbb2ccc6090703ccb347d35968ab3be454d30318904ab909df67eb854 WatchSource:0}: Error finding container 9e76ab1fbb2ccc6090703ccb347d35968ab3be454d30318904ab909df67eb854: Status 404 returned error can't find the container with id 9e76ab1fbb2ccc6090703ccb347d35968ab3be454d30318904ab909df67eb854 Feb 16 17:14:31 crc kubenswrapper[4870]: I0216 17:14:31.195911 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd"] Feb 16 17:14:31 crc kubenswrapper[4870]: I0216 17:14:31.373972 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-s4fff" event={"ID":"699cb420-5e4c-42ea-841c-b368459f6a2e","Type":"ContainerStarted","Data":"9e76ab1fbb2ccc6090703ccb347d35968ab3be454d30318904ab909df67eb854"} Feb 16 17:14:31 crc kubenswrapper[4870]: I0216 17:14:31.375072 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-klh69" 
event={"ID":"cb34d88d-4dbf-4253-8537-dda975a9985a","Type":"ContainerStarted","Data":"cf9ac6ec597b32633f5dd69ac3894b3cc546541a3aed4132262b1b8c20bf9e90"} Feb 16 17:14:31 crc kubenswrapper[4870]: I0216 17:14:31.379274 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b5869d457-khjj4" event={"ID":"336f228c-ae0a-45c4-be67-b37e6fe2e063","Type":"ContainerStarted","Data":"bc56aa4182b1577119c918729871f53748d258c83d49d074d5ce17c89e75858c"} Feb 16 17:14:31 crc kubenswrapper[4870]: I0216 17:14:31.379426 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b5869d457-khjj4" event={"ID":"336f228c-ae0a-45c4-be67-b37e6fe2e063","Type":"ContainerStarted","Data":"f33c37a422d7b1a56b764584a09845c479daa6ea74925710c60d043b0e078414"} Feb 16 17:14:31 crc kubenswrapper[4870]: I0216 17:14:31.381243 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd" event={"ID":"8d8293d9-58e9-4f2f-a81e-efbcdfe14d27","Type":"ContainerStarted","Data":"bdf6169d25ea50ec3b4b3645ab1711a2324fa693a76e42b0523646acf13a8347"} Feb 16 17:14:31 crc kubenswrapper[4870]: I0216 17:14:31.397095 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b5869d457-khjj4" podStartSLOduration=1.397070942 podStartE2EDuration="1.397070942s" podCreationTimestamp="2026-02-16 17:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:14:31.39418733 +0000 UTC m=+875.877651714" watchObservedRunningTime="2026-02-16 17:14:31.397070942 +0000 UTC m=+875.880535326" Feb 16 17:14:33 crc kubenswrapper[4870]: I0216 17:14:33.395196 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-klh69" 
event={"ID":"cb34d88d-4dbf-4253-8537-dda975a9985a","Type":"ContainerStarted","Data":"85aecec9d60fe63f6879b14e477a81aac1445e46d954a4c597770e7b50f8543d"} Feb 16 17:14:33 crc kubenswrapper[4870]: I0216 17:14:33.396792 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-s4fff" event={"ID":"699cb420-5e4c-42ea-841c-b368459f6a2e","Type":"ContainerStarted","Data":"96bbfb8e392ff6a43c622f8c1ab3bdb33dcbe13c3440f2d2df36196a557a946b"} Feb 16 17:14:33 crc kubenswrapper[4870]: I0216 17:14:33.396917 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-s4fff" Feb 16 17:14:33 crc kubenswrapper[4870]: I0216 17:14:33.398319 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5xr9s" event={"ID":"16269c6b-867e-4277-91a2-52456a4424f2","Type":"ContainerStarted","Data":"d69071d6871c999fbf480a0e494b48777cffb6f6d572db5c458de2cde1dc3626"} Feb 16 17:14:33 crc kubenswrapper[4870]: I0216 17:14:33.398452 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-5xr9s" Feb 16 17:14:33 crc kubenswrapper[4870]: I0216 17:14:33.424793 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-s4fff" podStartSLOduration=2.44673259 podStartE2EDuration="4.424778884s" podCreationTimestamp="2026-02-16 17:14:29 +0000 UTC" firstStartedPulling="2026-02-16 17:14:31.001749109 +0000 UTC m=+875.485213493" lastFinishedPulling="2026-02-16 17:14:32.979795403 +0000 UTC m=+877.463259787" observedRunningTime="2026-02-16 17:14:33.422671364 +0000 UTC m=+877.906135748" watchObservedRunningTime="2026-02-16 17:14:33.424778884 +0000 UTC m=+877.908243258" Feb 16 17:14:33 crc kubenswrapper[4870]: I0216 17:14:33.445138 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-5xr9s" 
podStartSLOduration=1.8309349 podStartE2EDuration="4.445120385s" podCreationTimestamp="2026-02-16 17:14:29 +0000 UTC" firstStartedPulling="2026-02-16 17:14:30.329729843 +0000 UTC m=+874.813194227" lastFinishedPulling="2026-02-16 17:14:32.943915328 +0000 UTC m=+877.427379712" observedRunningTime="2026-02-16 17:14:33.443639863 +0000 UTC m=+877.927104247" watchObservedRunningTime="2026-02-16 17:14:33.445120385 +0000 UTC m=+877.928584769" Feb 16 17:14:35 crc kubenswrapper[4870]: I0216 17:14:35.367044 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:14:35 crc kubenswrapper[4870]: I0216 17:14:35.367557 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:14:35 crc kubenswrapper[4870]: I0216 17:14:35.410848 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd" event={"ID":"8d8293d9-58e9-4f2f-a81e-efbcdfe14d27","Type":"ContainerStarted","Data":"b033ae090878a423d5bbf1cde8d7d8f6922c8aa50ccf6e4e8316955aac34e05f"} Feb 16 17:14:35 crc kubenswrapper[4870]: I0216 17:14:35.427145 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-bhvfd" podStartSLOduration=3.278425019 podStartE2EDuration="6.427083832s" podCreationTimestamp="2026-02-16 17:14:29 +0000 UTC" firstStartedPulling="2026-02-16 17:14:31.235310511 +0000 UTC m=+875.718774895" lastFinishedPulling="2026-02-16 17:14:34.383969324 +0000 UTC m=+878.867433708" 
observedRunningTime="2026-02-16 17:14:35.423929892 +0000 UTC m=+879.907394326" watchObservedRunningTime="2026-02-16 17:14:35.427083832 +0000 UTC m=+879.910548246" Feb 16 17:14:36 crc kubenswrapper[4870]: I0216 17:14:36.424678 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-klh69" event={"ID":"cb34d88d-4dbf-4253-8537-dda975a9985a","Type":"ContainerStarted","Data":"efe5422dd602ac85c9b25072a1f25b3fc354d1c72dbf2587eb0abe16784cf6ec"} Feb 16 17:14:36 crc kubenswrapper[4870]: I0216 17:14:36.463792 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-klh69" podStartSLOduration=2.482735368 podStartE2EDuration="7.463761794s" podCreationTimestamp="2026-02-16 17:14:29 +0000 UTC" firstStartedPulling="2026-02-16 17:14:30.477478093 +0000 UTC m=+874.960942477" lastFinishedPulling="2026-02-16 17:14:35.458504509 +0000 UTC m=+879.941968903" observedRunningTime="2026-02-16 17:14:36.454577202 +0000 UTC m=+880.938041656" watchObservedRunningTime="2026-02-16 17:14:36.463761794 +0000 UTC m=+880.947226218" Feb 16 17:14:40 crc kubenswrapper[4870]: I0216 17:14:40.258153 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-5xr9s" Feb 16 17:14:40 crc kubenswrapper[4870]: I0216 17:14:40.632581 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:40 crc kubenswrapper[4870]: I0216 17:14:40.632872 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:40 crc kubenswrapper[4870]: I0216 17:14:40.638340 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:41 crc kubenswrapper[4870]: I0216 17:14:41.462762 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-6b5869d457-khjj4" Feb 16 17:14:41 crc kubenswrapper[4870]: I0216 17:14:41.517781 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-n96b6"] Feb 16 17:14:50 crc kubenswrapper[4870]: I0216 17:14:50.789876 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-s4fff" Feb 16 17:15:00 crc kubenswrapper[4870]: I0216 17:15:00.170795 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v"] Feb 16 17:15:00 crc kubenswrapper[4870]: I0216 17:15:00.172451 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v" Feb 16 17:15:00 crc kubenswrapper[4870]: I0216 17:15:00.177246 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 17:15:00 crc kubenswrapper[4870]: I0216 17:15:00.177698 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 17:15:00 crc kubenswrapper[4870]: I0216 17:15:00.181842 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v"] Feb 16 17:15:00 crc kubenswrapper[4870]: I0216 17:15:00.251826 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-secret-volume\") pod \"collect-profiles-29521035-jkj2v\" (UID: \"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v" Feb 16 17:15:00 crc kubenswrapper[4870]: I0216 17:15:00.251878 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-config-volume\") pod \"collect-profiles-29521035-jkj2v\" (UID: \"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v" Feb 16 17:15:00 crc kubenswrapper[4870]: I0216 17:15:00.251915 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxbwj\" (UniqueName: \"kubernetes.io/projected/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-kube-api-access-rxbwj\") pod \"collect-profiles-29521035-jkj2v\" (UID: \"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v" Feb 16 17:15:00 crc kubenswrapper[4870]: I0216 17:15:00.353205 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-secret-volume\") pod \"collect-profiles-29521035-jkj2v\" (UID: \"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v" Feb 16 17:15:00 crc kubenswrapper[4870]: I0216 17:15:00.353248 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-config-volume\") pod \"collect-profiles-29521035-jkj2v\" (UID: \"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v" Feb 16 17:15:00 crc kubenswrapper[4870]: I0216 17:15:00.353285 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbwj\" (UniqueName: \"kubernetes.io/projected/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-kube-api-access-rxbwj\") pod \"collect-profiles-29521035-jkj2v\" (UID: \"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v" Feb 16 17:15:00 crc kubenswrapper[4870]: I0216 17:15:00.359309 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-secret-volume\") pod \"collect-profiles-29521035-jkj2v\" (UID: \"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v" Feb 16 17:15:00 crc kubenswrapper[4870]: I0216 17:15:00.360199 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-config-volume\") pod \"collect-profiles-29521035-jkj2v\" (UID: \"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v" Feb 16 17:15:00 crc kubenswrapper[4870]: I0216 17:15:00.374330 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxbwj\" (UniqueName: \"kubernetes.io/projected/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-kube-api-access-rxbwj\") pod \"collect-profiles-29521035-jkj2v\" (UID: \"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v" Feb 16 17:15:00 crc kubenswrapper[4870]: I0216 17:15:00.499279 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v" Feb 16 17:15:00 crc kubenswrapper[4870]: I0216 17:15:00.742595 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v"] Feb 16 17:15:01 crc kubenswrapper[4870]: I0216 17:15:01.601990 4870 generic.go:334] "Generic (PLEG): container finished" podID="d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f" containerID="af34b6f84daa57e566e6b73b29c72c7a8f30a6eb806a0230443abd6568b11500" exitCode=0 Feb 16 17:15:01 crc kubenswrapper[4870]: I0216 17:15:01.602527 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v" event={"ID":"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f","Type":"ContainerDied","Data":"af34b6f84daa57e566e6b73b29c72c7a8f30a6eb806a0230443abd6568b11500"} Feb 16 17:15:01 crc kubenswrapper[4870]: I0216 17:15:01.602598 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v" event={"ID":"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f","Type":"ContainerStarted","Data":"1b5e3c30be6664c91c1ef45b0887125132f8c9042ef54c4b90ea776c89a12ddc"} Feb 16 17:15:02 crc kubenswrapper[4870]: I0216 17:15:02.836570 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v" Feb 16 17:15:02 crc kubenswrapper[4870]: I0216 17:15:02.994468 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-config-volume\") pod \"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f\" (UID: \"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f\") " Feb 16 17:15:02 crc kubenswrapper[4870]: I0216 17:15:02.994566 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-secret-volume\") pod \"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f\" (UID: \"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f\") " Feb 16 17:15:02 crc kubenswrapper[4870]: I0216 17:15:02.994600 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxbwj\" (UniqueName: \"kubernetes.io/projected/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-kube-api-access-rxbwj\") pod \"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f\" (UID: \"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f\") " Feb 16 17:15:02 crc kubenswrapper[4870]: I0216 17:15:02.996475 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-config-volume" (OuterVolumeSpecName: "config-volume") pod "d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f" (UID: "d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:15:03 crc kubenswrapper[4870]: I0216 17:15:03.001041 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f" (UID: "d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:15:03 crc kubenswrapper[4870]: I0216 17:15:03.001195 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-kube-api-access-rxbwj" (OuterVolumeSpecName: "kube-api-access-rxbwj") pod "d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f" (UID: "d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f"). InnerVolumeSpecName "kube-api-access-rxbwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:15:03 crc kubenswrapper[4870]: I0216 17:15:03.095584 4870 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:03 crc kubenswrapper[4870]: I0216 17:15:03.095620 4870 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:03 crc kubenswrapper[4870]: I0216 17:15:03.095631 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxbwj\" (UniqueName: \"kubernetes.io/projected/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f-kube-api-access-rxbwj\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:03 crc kubenswrapper[4870]: I0216 17:15:03.622572 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v" event={"ID":"d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f","Type":"ContainerDied","Data":"1b5e3c30be6664c91c1ef45b0887125132f8c9042ef54c4b90ea776c89a12ddc"} Feb 16 17:15:03 crc kubenswrapper[4870]: I0216 17:15:03.622614 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b5e3c30be6664c91c1ef45b0887125132f8c9042ef54c4b90ea776c89a12ddc" Feb 16 17:15:03 crc kubenswrapper[4870]: I0216 17:15:03.622622 4870 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v" Feb 16 17:15:05 crc kubenswrapper[4870]: I0216 17:15:05.366656 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:15:05 crc kubenswrapper[4870]: I0216 17:15:05.367039 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:15:06 crc kubenswrapper[4870]: I0216 17:15:06.596572 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-n96b6" podUID="ed053e72-4999-4b5d-a9f3-c58b92280c8c" containerName="console" containerID="cri-o://67e5badcc69d4c67e29b3dc3fca67b1b769d32f8d9bd04e0c5b18896433199c9" gracePeriod=15 Feb 16 17:15:06 crc kubenswrapper[4870]: I0216 17:15:06.696163 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl"] Feb 16 17:15:06 crc kubenswrapper[4870]: E0216 17:15:06.696748 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f" containerName="collect-profiles" Feb 16 17:15:06 crc kubenswrapper[4870]: I0216 17:15:06.696888 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f" containerName="collect-profiles" Feb 16 17:15:06 crc kubenswrapper[4870]: I0216 17:15:06.697203 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f" 
containerName="collect-profiles" Feb 16 17:15:06 crc kubenswrapper[4870]: I0216 17:15:06.698520 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" Feb 16 17:15:06 crc kubenswrapper[4870]: I0216 17:15:06.701000 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 17:15:06 crc kubenswrapper[4870]: I0216 17:15:06.719017 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl"] Feb 16 17:15:06 crc kubenswrapper[4870]: I0216 17:15:06.844193 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60291662-7eb9-46bf-afbc-e75937b19398-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl\" (UID: \"60291662-7eb9-46bf-afbc-e75937b19398\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" Feb 16 17:15:06 crc kubenswrapper[4870]: I0216 17:15:06.844413 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60291662-7eb9-46bf-afbc-e75937b19398-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl\" (UID: \"60291662-7eb9-46bf-afbc-e75937b19398\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" Feb 16 17:15:06 crc kubenswrapper[4870]: I0216 17:15:06.844559 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s24l\" (UniqueName: \"kubernetes.io/projected/60291662-7eb9-46bf-afbc-e75937b19398-kube-api-access-6s24l\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl\" (UID: 
\"60291662-7eb9-46bf-afbc-e75937b19398\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" Feb 16 17:15:06 crc kubenswrapper[4870]: I0216 17:15:06.945755 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s24l\" (UniqueName: \"kubernetes.io/projected/60291662-7eb9-46bf-afbc-e75937b19398-kube-api-access-6s24l\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl\" (UID: \"60291662-7eb9-46bf-afbc-e75937b19398\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" Feb 16 17:15:06 crc kubenswrapper[4870]: I0216 17:15:06.945870 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60291662-7eb9-46bf-afbc-e75937b19398-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl\" (UID: \"60291662-7eb9-46bf-afbc-e75937b19398\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" Feb 16 17:15:06 crc kubenswrapper[4870]: I0216 17:15:06.945899 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60291662-7eb9-46bf-afbc-e75937b19398-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl\" (UID: \"60291662-7eb9-46bf-afbc-e75937b19398\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" Feb 16 17:15:06 crc kubenswrapper[4870]: I0216 17:15:06.946469 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60291662-7eb9-46bf-afbc-e75937b19398-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl\" (UID: \"60291662-7eb9-46bf-afbc-e75937b19398\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" Feb 16 17:15:06 crc 
kubenswrapper[4870]: I0216 17:15:06.946700 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60291662-7eb9-46bf-afbc-e75937b19398-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl\" (UID: \"60291662-7eb9-46bf-afbc-e75937b19398\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" Feb 16 17:15:06 crc kubenswrapper[4870]: I0216 17:15:06.954778 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-n96b6_ed053e72-4999-4b5d-a9f3-c58b92280c8c/console/0.log" Feb 16 17:15:06 crc kubenswrapper[4870]: I0216 17:15:06.954881 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:15:06 crc kubenswrapper[4870]: I0216 17:15:06.970412 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s24l\" (UniqueName: \"kubernetes.io/projected/60291662-7eb9-46bf-afbc-e75937b19398-kube-api-access-6s24l\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl\" (UID: \"60291662-7eb9-46bf-afbc-e75937b19398\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.034203 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.046413 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-service-ca\") pod \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.046457 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-oauth-config\") pod \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.046544 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-oauth-serving-cert\") pod \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.046586 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-serving-cert\") pod \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.046638 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hmnw\" (UniqueName: \"kubernetes.io/projected/ed053e72-4999-4b5d-a9f3-c58b92280c8c-kube-api-access-8hmnw\") pod \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.046663 4870 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-config\") pod \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.046685 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-trusted-ca-bundle\") pod \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\" (UID: \"ed053e72-4999-4b5d-a9f3-c58b92280c8c\") " Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.047218 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-service-ca" (OuterVolumeSpecName: "service-ca") pod "ed053e72-4999-4b5d-a9f3-c58b92280c8c" (UID: "ed053e72-4999-4b5d-a9f3-c58b92280c8c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.047584 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ed053e72-4999-4b5d-a9f3-c58b92280c8c" (UID: "ed053e72-4999-4b5d-a9f3-c58b92280c8c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.048224 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-config" (OuterVolumeSpecName: "console-config") pod "ed053e72-4999-4b5d-a9f3-c58b92280c8c" (UID: "ed053e72-4999-4b5d-a9f3-c58b92280c8c"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.049332 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ed053e72-4999-4b5d-a9f3-c58b92280c8c" (UID: "ed053e72-4999-4b5d-a9f3-c58b92280c8c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.050497 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ed053e72-4999-4b5d-a9f3-c58b92280c8c" (UID: "ed053e72-4999-4b5d-a9f3-c58b92280c8c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.052628 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ed053e72-4999-4b5d-a9f3-c58b92280c8c" (UID: "ed053e72-4999-4b5d-a9f3-c58b92280c8c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.059832 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed053e72-4999-4b5d-a9f3-c58b92280c8c-kube-api-access-8hmnw" (OuterVolumeSpecName: "kube-api-access-8hmnw") pod "ed053e72-4999-4b5d-a9f3-c58b92280c8c" (UID: "ed053e72-4999-4b5d-a9f3-c58b92280c8c"). InnerVolumeSpecName "kube-api-access-8hmnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.148921 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hmnw\" (UniqueName: \"kubernetes.io/projected/ed053e72-4999-4b5d-a9f3-c58b92280c8c-kube-api-access-8hmnw\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.149037 4870 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.149056 4870 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.149074 4870 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.149090 4870 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.149106 4870 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed053e72-4999-4b5d-a9f3-c58b92280c8c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.149122 4870 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed053e72-4999-4b5d-a9f3-c58b92280c8c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:07 crc 
kubenswrapper[4870]: I0216 17:15:07.313779 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl"] Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.645670 4870 generic.go:334] "Generic (PLEG): container finished" podID="60291662-7eb9-46bf-afbc-e75937b19398" containerID="c79d8329b05667018690402197369955f82c3ed03ecf7a0cac3800ba470d340e" exitCode=0 Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.645718 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" event={"ID":"60291662-7eb9-46bf-afbc-e75937b19398","Type":"ContainerDied","Data":"c79d8329b05667018690402197369955f82c3ed03ecf7a0cac3800ba470d340e"} Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.645789 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" event={"ID":"60291662-7eb9-46bf-afbc-e75937b19398","Type":"ContainerStarted","Data":"a402d374b86a650adb679ce0139d800992c89cbdf3df9957d8464163ecee6829"} Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.648832 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-n96b6_ed053e72-4999-4b5d-a9f3-c58b92280c8c/console/0.log" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.648933 4870 generic.go:334] "Generic (PLEG): container finished" podID="ed053e72-4999-4b5d-a9f3-c58b92280c8c" containerID="67e5badcc69d4c67e29b3dc3fca67b1b769d32f8d9bd04e0c5b18896433199c9" exitCode=2 Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.649000 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-n96b6" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.649028 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n96b6" event={"ID":"ed053e72-4999-4b5d-a9f3-c58b92280c8c","Type":"ContainerDied","Data":"67e5badcc69d4c67e29b3dc3fca67b1b769d32f8d9bd04e0c5b18896433199c9"} Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.649083 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n96b6" event={"ID":"ed053e72-4999-4b5d-a9f3-c58b92280c8c","Type":"ContainerDied","Data":"f55ea301b80af414714f0bc3f5d5574283231f779bb9a6a279a21f2f2b6860e1"} Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.649118 4870 scope.go:117] "RemoveContainer" containerID="67e5badcc69d4c67e29b3dc3fca67b1b769d32f8d9bd04e0c5b18896433199c9" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.674040 4870 scope.go:117] "RemoveContainer" containerID="67e5badcc69d4c67e29b3dc3fca67b1b769d32f8d9bd04e0c5b18896433199c9" Feb 16 17:15:07 crc kubenswrapper[4870]: E0216 17:15:07.674751 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67e5badcc69d4c67e29b3dc3fca67b1b769d32f8d9bd04e0c5b18896433199c9\": container with ID starting with 67e5badcc69d4c67e29b3dc3fca67b1b769d32f8d9bd04e0c5b18896433199c9 not found: ID does not exist" containerID="67e5badcc69d4c67e29b3dc3fca67b1b769d32f8d9bd04e0c5b18896433199c9" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.674813 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67e5badcc69d4c67e29b3dc3fca67b1b769d32f8d9bd04e0c5b18896433199c9"} err="failed to get container status \"67e5badcc69d4c67e29b3dc3fca67b1b769d32f8d9bd04e0c5b18896433199c9\": rpc error: code = NotFound desc = could not find container \"67e5badcc69d4c67e29b3dc3fca67b1b769d32f8d9bd04e0c5b18896433199c9\": 
container with ID starting with 67e5badcc69d4c67e29b3dc3fca67b1b769d32f8d9bd04e0c5b18896433199c9 not found: ID does not exist" Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.681656 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-n96b6"] Feb 16 17:15:07 crc kubenswrapper[4870]: I0216 17:15:07.685382 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-n96b6"] Feb 16 17:15:08 crc kubenswrapper[4870]: I0216 17:15:08.232751 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed053e72-4999-4b5d-a9f3-c58b92280c8c" path="/var/lib/kubelet/pods/ed053e72-4999-4b5d-a9f3-c58b92280c8c/volumes" Feb 16 17:15:09 crc kubenswrapper[4870]: I0216 17:15:09.665398 4870 generic.go:334] "Generic (PLEG): container finished" podID="60291662-7eb9-46bf-afbc-e75937b19398" containerID="90a36e08905b36f2fbe005b928d2b3516f53f4452ed29c60060e5f80d097044f" exitCode=0 Feb 16 17:15:09 crc kubenswrapper[4870]: I0216 17:15:09.665453 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" event={"ID":"60291662-7eb9-46bf-afbc-e75937b19398","Type":"ContainerDied","Data":"90a36e08905b36f2fbe005b928d2b3516f53f4452ed29c60060e5f80d097044f"} Feb 16 17:15:10 crc kubenswrapper[4870]: I0216 17:15:10.687417 4870 generic.go:334] "Generic (PLEG): container finished" podID="60291662-7eb9-46bf-afbc-e75937b19398" containerID="f422598c57cc135ad31db5679705870a74d0c873c16cd66afbfe85e270f8440a" exitCode=0 Feb 16 17:15:10 crc kubenswrapper[4870]: I0216 17:15:10.687468 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" event={"ID":"60291662-7eb9-46bf-afbc-e75937b19398","Type":"ContainerDied","Data":"f422598c57cc135ad31db5679705870a74d0c873c16cd66afbfe85e270f8440a"} Feb 16 17:15:11 crc kubenswrapper[4870]: I0216 
17:15:11.954400 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" Feb 16 17:15:12 crc kubenswrapper[4870]: I0216 17:15:12.024774 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60291662-7eb9-46bf-afbc-e75937b19398-bundle\") pod \"60291662-7eb9-46bf-afbc-e75937b19398\" (UID: \"60291662-7eb9-46bf-afbc-e75937b19398\") " Feb 16 17:15:12 crc kubenswrapper[4870]: I0216 17:15:12.024828 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s24l\" (UniqueName: \"kubernetes.io/projected/60291662-7eb9-46bf-afbc-e75937b19398-kube-api-access-6s24l\") pod \"60291662-7eb9-46bf-afbc-e75937b19398\" (UID: \"60291662-7eb9-46bf-afbc-e75937b19398\") " Feb 16 17:15:12 crc kubenswrapper[4870]: I0216 17:15:12.024867 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60291662-7eb9-46bf-afbc-e75937b19398-util\") pod \"60291662-7eb9-46bf-afbc-e75937b19398\" (UID: \"60291662-7eb9-46bf-afbc-e75937b19398\") " Feb 16 17:15:12 crc kubenswrapper[4870]: I0216 17:15:12.025908 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60291662-7eb9-46bf-afbc-e75937b19398-bundle" (OuterVolumeSpecName: "bundle") pod "60291662-7eb9-46bf-afbc-e75937b19398" (UID: "60291662-7eb9-46bf-afbc-e75937b19398"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:15:12 crc kubenswrapper[4870]: I0216 17:15:12.028618 4870 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60291662-7eb9-46bf-afbc-e75937b19398-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:12 crc kubenswrapper[4870]: I0216 17:15:12.031694 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60291662-7eb9-46bf-afbc-e75937b19398-kube-api-access-6s24l" (OuterVolumeSpecName: "kube-api-access-6s24l") pod "60291662-7eb9-46bf-afbc-e75937b19398" (UID: "60291662-7eb9-46bf-afbc-e75937b19398"). InnerVolumeSpecName "kube-api-access-6s24l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:15:12 crc kubenswrapper[4870]: I0216 17:15:12.038446 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60291662-7eb9-46bf-afbc-e75937b19398-util" (OuterVolumeSpecName: "util") pod "60291662-7eb9-46bf-afbc-e75937b19398" (UID: "60291662-7eb9-46bf-afbc-e75937b19398"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:15:12 crc kubenswrapper[4870]: I0216 17:15:12.130312 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s24l\" (UniqueName: \"kubernetes.io/projected/60291662-7eb9-46bf-afbc-e75937b19398-kube-api-access-6s24l\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:12 crc kubenswrapper[4870]: I0216 17:15:12.130349 4870 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60291662-7eb9-46bf-afbc-e75937b19398-util\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:12 crc kubenswrapper[4870]: I0216 17:15:12.700793 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" event={"ID":"60291662-7eb9-46bf-afbc-e75937b19398","Type":"ContainerDied","Data":"a402d374b86a650adb679ce0139d800992c89cbdf3df9957d8464163ecee6829"} Feb 16 17:15:12 crc kubenswrapper[4870]: I0216 17:15:12.700827 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a402d374b86a650adb679ce0139d800992c89cbdf3df9957d8464163ecee6829" Feb 16 17:15:12 crc kubenswrapper[4870]: I0216 17:15:12.700869 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl" Feb 16 17:15:22 crc kubenswrapper[4870]: I0216 17:15:22.947415 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt"] Feb 16 17:15:22 crc kubenswrapper[4870]: E0216 17:15:22.948292 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed053e72-4999-4b5d-a9f3-c58b92280c8c" containerName="console" Feb 16 17:15:22 crc kubenswrapper[4870]: I0216 17:15:22.948310 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed053e72-4999-4b5d-a9f3-c58b92280c8c" containerName="console" Feb 16 17:15:22 crc kubenswrapper[4870]: E0216 17:15:22.948326 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60291662-7eb9-46bf-afbc-e75937b19398" containerName="extract" Feb 16 17:15:22 crc kubenswrapper[4870]: I0216 17:15:22.948333 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="60291662-7eb9-46bf-afbc-e75937b19398" containerName="extract" Feb 16 17:15:22 crc kubenswrapper[4870]: E0216 17:15:22.948345 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60291662-7eb9-46bf-afbc-e75937b19398" containerName="util" Feb 16 17:15:22 crc kubenswrapper[4870]: I0216 17:15:22.948352 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="60291662-7eb9-46bf-afbc-e75937b19398" containerName="util" Feb 16 17:15:22 crc kubenswrapper[4870]: E0216 17:15:22.948360 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60291662-7eb9-46bf-afbc-e75937b19398" containerName="pull" Feb 16 17:15:22 crc kubenswrapper[4870]: I0216 17:15:22.948367 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="60291662-7eb9-46bf-afbc-e75937b19398" containerName="pull" Feb 16 17:15:22 crc kubenswrapper[4870]: I0216 17:15:22.948489 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed053e72-4999-4b5d-a9f3-c58b92280c8c" 
containerName="console" Feb 16 17:15:22 crc kubenswrapper[4870]: I0216 17:15:22.948503 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="60291662-7eb9-46bf-afbc-e75937b19398" containerName="extract" Feb 16 17:15:22 crc kubenswrapper[4870]: I0216 17:15:22.949026 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt" Feb 16 17:15:22 crc kubenswrapper[4870]: I0216 17:15:22.952753 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 16 17:15:22 crc kubenswrapper[4870]: I0216 17:15:22.952770 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 16 17:15:22 crc kubenswrapper[4870]: I0216 17:15:22.952815 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-z9274" Feb 16 17:15:22 crc kubenswrapper[4870]: I0216 17:15:22.952895 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 16 17:15:22 crc kubenswrapper[4870]: I0216 17:15:22.952914 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 16 17:15:22 crc kubenswrapper[4870]: I0216 17:15:22.971685 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt"] Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.030742 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j9h6z"] Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.031848 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j9h6z" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.047904 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j9h6z"] Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.086865 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8005c88f-1465-4e77-bcd5-b58fe22b8055-webhook-cert\") pod \"metallb-operator-controller-manager-dfd88577c-t9fpt\" (UID: \"8005c88f-1465-4e77-bcd5-b58fe22b8055\") " pod="metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.086963 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8005c88f-1465-4e77-bcd5-b58fe22b8055-apiservice-cert\") pod \"metallb-operator-controller-manager-dfd88577c-t9fpt\" (UID: \"8005c88f-1465-4e77-bcd5-b58fe22b8055\") " pod="metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.087001 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z85m\" (UniqueName: \"kubernetes.io/projected/8005c88f-1465-4e77-bcd5-b58fe22b8055-kube-api-access-8z85m\") pod \"metallb-operator-controller-manager-dfd88577c-t9fpt\" (UID: \"8005c88f-1465-4e77-bcd5-b58fe22b8055\") " pod="metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.188649 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b368e9d0-b052-4739-9091-f48ba8e4d65d-catalog-content\") pod \"community-operators-j9h6z\" (UID: \"b368e9d0-b052-4739-9091-f48ba8e4d65d\") " 
pod="openshift-marketplace/community-operators-j9h6z" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.188696 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvvkf\" (UniqueName: \"kubernetes.io/projected/b368e9d0-b052-4739-9091-f48ba8e4d65d-kube-api-access-gvvkf\") pod \"community-operators-j9h6z\" (UID: \"b368e9d0-b052-4739-9091-f48ba8e4d65d\") " pod="openshift-marketplace/community-operators-j9h6z" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.188932 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b368e9d0-b052-4739-9091-f48ba8e4d65d-utilities\") pod \"community-operators-j9h6z\" (UID: \"b368e9d0-b052-4739-9091-f48ba8e4d65d\") " pod="openshift-marketplace/community-operators-j9h6z" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.189001 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8005c88f-1465-4e77-bcd5-b58fe22b8055-webhook-cert\") pod \"metallb-operator-controller-manager-dfd88577c-t9fpt\" (UID: \"8005c88f-1465-4e77-bcd5-b58fe22b8055\") " pod="metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.189101 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8005c88f-1465-4e77-bcd5-b58fe22b8055-apiservice-cert\") pod \"metallb-operator-controller-manager-dfd88577c-t9fpt\" (UID: \"8005c88f-1465-4e77-bcd5-b58fe22b8055\") " pod="metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.189143 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z85m\" (UniqueName: 
\"kubernetes.io/projected/8005c88f-1465-4e77-bcd5-b58fe22b8055-kube-api-access-8z85m\") pod \"metallb-operator-controller-manager-dfd88577c-t9fpt\" (UID: \"8005c88f-1465-4e77-bcd5-b58fe22b8055\") " pod="metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.198836 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8005c88f-1465-4e77-bcd5-b58fe22b8055-webhook-cert\") pod \"metallb-operator-controller-manager-dfd88577c-t9fpt\" (UID: \"8005c88f-1465-4e77-bcd5-b58fe22b8055\") " pod="metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.198879 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8005c88f-1465-4e77-bcd5-b58fe22b8055-apiservice-cert\") pod \"metallb-operator-controller-manager-dfd88577c-t9fpt\" (UID: \"8005c88f-1465-4e77-bcd5-b58fe22b8055\") " pod="metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.218580 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z85m\" (UniqueName: \"kubernetes.io/projected/8005c88f-1465-4e77-bcd5-b58fe22b8055-kube-api-access-8z85m\") pod \"metallb-operator-controller-manager-dfd88577c-t9fpt\" (UID: \"8005c88f-1465-4e77-bcd5-b58fe22b8055\") " pod="metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.268298 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2"] Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.269148 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.269562 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.273105 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.273110 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.273244 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-nww7f" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.290053 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b368e9d0-b052-4739-9091-f48ba8e4d65d-catalog-content\") pod \"community-operators-j9h6z\" (UID: \"b368e9d0-b052-4739-9091-f48ba8e4d65d\") " pod="openshift-marketplace/community-operators-j9h6z" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.290123 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvvkf\" (UniqueName: \"kubernetes.io/projected/b368e9d0-b052-4739-9091-f48ba8e4d65d-kube-api-access-gvvkf\") pod \"community-operators-j9h6z\" (UID: \"b368e9d0-b052-4739-9091-f48ba8e4d65d\") " pod="openshift-marketplace/community-operators-j9h6z" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.290296 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b368e9d0-b052-4739-9091-f48ba8e4d65d-utilities\") pod \"community-operators-j9h6z\" (UID: \"b368e9d0-b052-4739-9091-f48ba8e4d65d\") " 
pod="openshift-marketplace/community-operators-j9h6z" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.290810 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b368e9d0-b052-4739-9091-f48ba8e4d65d-catalog-content\") pod \"community-operators-j9h6z\" (UID: \"b368e9d0-b052-4739-9091-f48ba8e4d65d\") " pod="openshift-marketplace/community-operators-j9h6z" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.290837 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b368e9d0-b052-4739-9091-f48ba8e4d65d-utilities\") pod \"community-operators-j9h6z\" (UID: \"b368e9d0-b052-4739-9091-f48ba8e4d65d\") " pod="openshift-marketplace/community-operators-j9h6z" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.314203 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2"] Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.317781 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvvkf\" (UniqueName: \"kubernetes.io/projected/b368e9d0-b052-4739-9091-f48ba8e4d65d-kube-api-access-gvvkf\") pod \"community-operators-j9h6z\" (UID: \"b368e9d0-b052-4739-9091-f48ba8e4d65d\") " pod="openshift-marketplace/community-operators-j9h6z" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.347785 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j9h6z" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.392571 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4xn7\" (UniqueName: \"kubernetes.io/projected/c7148109-37fd-4199-9cca-df3f97d2d070-kube-api-access-k4xn7\") pod \"metallb-operator-webhook-server-6b886dc755-4pxj2\" (UID: \"c7148109-37fd-4199-9cca-df3f97d2d070\") " pod="metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.392694 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c7148109-37fd-4199-9cca-df3f97d2d070-apiservice-cert\") pod \"metallb-operator-webhook-server-6b886dc755-4pxj2\" (UID: \"c7148109-37fd-4199-9cca-df3f97d2d070\") " pod="metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.392739 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c7148109-37fd-4199-9cca-df3f97d2d070-webhook-cert\") pod \"metallb-operator-webhook-server-6b886dc755-4pxj2\" (UID: \"c7148109-37fd-4199-9cca-df3f97d2d070\") " pod="metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.493754 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4xn7\" (UniqueName: \"kubernetes.io/projected/c7148109-37fd-4199-9cca-df3f97d2d070-kube-api-access-k4xn7\") pod \"metallb-operator-webhook-server-6b886dc755-4pxj2\" (UID: \"c7148109-37fd-4199-9cca-df3f97d2d070\") " pod="metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.493818 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c7148109-37fd-4199-9cca-df3f97d2d070-apiservice-cert\") pod \"metallb-operator-webhook-server-6b886dc755-4pxj2\" (UID: \"c7148109-37fd-4199-9cca-df3f97d2d070\") " pod="metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.493838 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c7148109-37fd-4199-9cca-df3f97d2d070-webhook-cert\") pod \"metallb-operator-webhook-server-6b886dc755-4pxj2\" (UID: \"c7148109-37fd-4199-9cca-df3f97d2d070\") " pod="metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.515341 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c7148109-37fd-4199-9cca-df3f97d2d070-webhook-cert\") pod \"metallb-operator-webhook-server-6b886dc755-4pxj2\" (UID: \"c7148109-37fd-4199-9cca-df3f97d2d070\") " pod="metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.516914 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c7148109-37fd-4199-9cca-df3f97d2d070-apiservice-cert\") pod \"metallb-operator-webhook-server-6b886dc755-4pxj2\" (UID: \"c7148109-37fd-4199-9cca-df3f97d2d070\") " pod="metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.523095 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4xn7\" (UniqueName: \"kubernetes.io/projected/c7148109-37fd-4199-9cca-df3f97d2d070-kube-api-access-k4xn7\") pod \"metallb-operator-webhook-server-6b886dc755-4pxj2\" (UID: \"c7148109-37fd-4199-9cca-df3f97d2d070\") " 
pod="metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.637629 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt"] Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.637787 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2" Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.764827 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt" event={"ID":"8005c88f-1465-4e77-bcd5-b58fe22b8055","Type":"ContainerStarted","Data":"d509414ca932e65f5fc526f5e59b111e6082412ad4f18c8c4c0eccb0eaef921a"} Feb 16 17:15:23 crc kubenswrapper[4870]: I0216 17:15:23.947178 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j9h6z"] Feb 16 17:15:24 crc kubenswrapper[4870]: I0216 17:15:24.005252 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2"] Feb 16 17:15:24 crc kubenswrapper[4870]: W0216 17:15:24.020157 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7148109_37fd_4199_9cca_df3f97d2d070.slice/crio-a460b5a71f3a1fd45970b40fba22f646cc2134fc96ec8b38e744447265378747 WatchSource:0}: Error finding container a460b5a71f3a1fd45970b40fba22f646cc2134fc96ec8b38e744447265378747: Status 404 returned error can't find the container with id a460b5a71f3a1fd45970b40fba22f646cc2134fc96ec8b38e744447265378747 Feb 16 17:15:24 crc kubenswrapper[4870]: I0216 17:15:24.771838 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2" 
event={"ID":"c7148109-37fd-4199-9cca-df3f97d2d070","Type":"ContainerStarted","Data":"a460b5a71f3a1fd45970b40fba22f646cc2134fc96ec8b38e744447265378747"} Feb 16 17:15:24 crc kubenswrapper[4870]: I0216 17:15:24.773642 4870 generic.go:334] "Generic (PLEG): container finished" podID="b368e9d0-b052-4739-9091-f48ba8e4d65d" containerID="f90d89f8839352486b96bfce0c480236789c84e64e559774c67e0afb3a59cc19" exitCode=0 Feb 16 17:15:24 crc kubenswrapper[4870]: I0216 17:15:24.773679 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9h6z" event={"ID":"b368e9d0-b052-4739-9091-f48ba8e4d65d","Type":"ContainerDied","Data":"f90d89f8839352486b96bfce0c480236789c84e64e559774c67e0afb3a59cc19"} Feb 16 17:15:24 crc kubenswrapper[4870]: I0216 17:15:24.773696 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9h6z" event={"ID":"b368e9d0-b052-4739-9091-f48ba8e4d65d","Type":"ContainerStarted","Data":"a4526892ae16d4cccd2ac8c7c5078b28b1feb1ae593e70c0201bb63b24342544"} Feb 16 17:15:25 crc kubenswrapper[4870]: I0216 17:15:25.787934 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9h6z" event={"ID":"b368e9d0-b052-4739-9091-f48ba8e4d65d","Type":"ContainerStarted","Data":"db0ff11df61209f83ed7c920be98fcfb9b5fbf61c83b503e05832772485fd4e1"} Feb 16 17:15:26 crc kubenswrapper[4870]: I0216 17:15:26.804441 4870 generic.go:334] "Generic (PLEG): container finished" podID="b368e9d0-b052-4739-9091-f48ba8e4d65d" containerID="db0ff11df61209f83ed7c920be98fcfb9b5fbf61c83b503e05832772485fd4e1" exitCode=0 Feb 16 17:15:26 crc kubenswrapper[4870]: I0216 17:15:26.805867 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9h6z" event={"ID":"b368e9d0-b052-4739-9091-f48ba8e4d65d","Type":"ContainerDied","Data":"db0ff11df61209f83ed7c920be98fcfb9b5fbf61c83b503e05832772485fd4e1"} Feb 16 17:15:26 crc kubenswrapper[4870]: 
I0216 17:15:26.844045 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2l55b"] Feb 16 17:15:26 crc kubenswrapper[4870]: I0216 17:15:26.845131 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2l55b" Feb 16 17:15:26 crc kubenswrapper[4870]: I0216 17:15:26.849861 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2l55b"] Feb 16 17:15:26 crc kubenswrapper[4870]: I0216 17:15:26.958906 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef671923-9ea8-4aba-862d-ce9d82cbaab1-utilities\") pod \"certified-operators-2l55b\" (UID: \"ef671923-9ea8-4aba-862d-ce9d82cbaab1\") " pod="openshift-marketplace/certified-operators-2l55b" Feb 16 17:15:26 crc kubenswrapper[4870]: I0216 17:15:26.958994 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef671923-9ea8-4aba-862d-ce9d82cbaab1-catalog-content\") pod \"certified-operators-2l55b\" (UID: \"ef671923-9ea8-4aba-862d-ce9d82cbaab1\") " pod="openshift-marketplace/certified-operators-2l55b" Feb 16 17:15:26 crc kubenswrapper[4870]: I0216 17:15:26.959061 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8xtk\" (UniqueName: \"kubernetes.io/projected/ef671923-9ea8-4aba-862d-ce9d82cbaab1-kube-api-access-t8xtk\") pod \"certified-operators-2l55b\" (UID: \"ef671923-9ea8-4aba-862d-ce9d82cbaab1\") " pod="openshift-marketplace/certified-operators-2l55b" Feb 16 17:15:27 crc kubenswrapper[4870]: I0216 17:15:27.060020 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8xtk\" (UniqueName: 
\"kubernetes.io/projected/ef671923-9ea8-4aba-862d-ce9d82cbaab1-kube-api-access-t8xtk\") pod \"certified-operators-2l55b\" (UID: \"ef671923-9ea8-4aba-862d-ce9d82cbaab1\") " pod="openshift-marketplace/certified-operators-2l55b" Feb 16 17:15:27 crc kubenswrapper[4870]: I0216 17:15:27.060109 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef671923-9ea8-4aba-862d-ce9d82cbaab1-utilities\") pod \"certified-operators-2l55b\" (UID: \"ef671923-9ea8-4aba-862d-ce9d82cbaab1\") " pod="openshift-marketplace/certified-operators-2l55b" Feb 16 17:15:27 crc kubenswrapper[4870]: I0216 17:15:27.060146 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef671923-9ea8-4aba-862d-ce9d82cbaab1-catalog-content\") pod \"certified-operators-2l55b\" (UID: \"ef671923-9ea8-4aba-862d-ce9d82cbaab1\") " pod="openshift-marketplace/certified-operators-2l55b" Feb 16 17:15:27 crc kubenswrapper[4870]: I0216 17:15:27.060662 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef671923-9ea8-4aba-862d-ce9d82cbaab1-catalog-content\") pod \"certified-operators-2l55b\" (UID: \"ef671923-9ea8-4aba-862d-ce9d82cbaab1\") " pod="openshift-marketplace/certified-operators-2l55b" Feb 16 17:15:27 crc kubenswrapper[4870]: I0216 17:15:27.061372 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef671923-9ea8-4aba-862d-ce9d82cbaab1-utilities\") pod \"certified-operators-2l55b\" (UID: \"ef671923-9ea8-4aba-862d-ce9d82cbaab1\") " pod="openshift-marketplace/certified-operators-2l55b" Feb 16 17:15:27 crc kubenswrapper[4870]: I0216 17:15:27.097000 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8xtk\" (UniqueName: 
\"kubernetes.io/projected/ef671923-9ea8-4aba-862d-ce9d82cbaab1-kube-api-access-t8xtk\") pod \"certified-operators-2l55b\" (UID: \"ef671923-9ea8-4aba-862d-ce9d82cbaab1\") " pod="openshift-marketplace/certified-operators-2l55b" Feb 16 17:15:27 crc kubenswrapper[4870]: I0216 17:15:27.178168 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2l55b" Feb 16 17:15:28 crc kubenswrapper[4870]: I0216 17:15:28.266482 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2l55b"] Feb 16 17:15:29 crc kubenswrapper[4870]: W0216 17:15:29.868103 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef671923_9ea8_4aba_862d_ce9d82cbaab1.slice/crio-30d29405cf2855ab500bb9227e3f2c14a2189e29b66537252f2c9b99eca00af6 WatchSource:0}: Error finding container 30d29405cf2855ab500bb9227e3f2c14a2189e29b66537252f2c9b99eca00af6: Status 404 returned error can't find the container with id 30d29405cf2855ab500bb9227e3f2c14a2189e29b66537252f2c9b99eca00af6 Feb 16 17:15:31 crc kubenswrapper[4870]: I0216 17:15:31.137308 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt" event={"ID":"8005c88f-1465-4e77-bcd5-b58fe22b8055","Type":"ContainerStarted","Data":"92fcd4abd629e6960641a53f6571e4d7b3c3b33ea390f70580b1d664ab0e1aef"} Feb 16 17:15:31 crc kubenswrapper[4870]: I0216 17:15:31.147163 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt" Feb 16 17:15:31 crc kubenswrapper[4870]: I0216 17:15:31.148749 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9h6z" 
event={"ID":"b368e9d0-b052-4739-9091-f48ba8e4d65d","Type":"ContainerStarted","Data":"c4653db2065941d09bf4e65aca015c97b9134332c7abedf3cc71d72c39b81d8e"} Feb 16 17:15:31 crc kubenswrapper[4870]: I0216 17:15:31.154424 4870 generic.go:334] "Generic (PLEG): container finished" podID="ef671923-9ea8-4aba-862d-ce9d82cbaab1" containerID="da141e738bfe263168cccf4b876298f656d849e80b3324239dc49e92d3138645" exitCode=0 Feb 16 17:15:31 crc kubenswrapper[4870]: I0216 17:15:31.154736 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l55b" event={"ID":"ef671923-9ea8-4aba-862d-ce9d82cbaab1","Type":"ContainerDied","Data":"da141e738bfe263168cccf4b876298f656d849e80b3324239dc49e92d3138645"} Feb 16 17:15:31 crc kubenswrapper[4870]: I0216 17:15:31.154791 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l55b" event={"ID":"ef671923-9ea8-4aba-862d-ce9d82cbaab1","Type":"ContainerStarted","Data":"30d29405cf2855ab500bb9227e3f2c14a2189e29b66537252f2c9b99eca00af6"} Feb 16 17:15:31 crc kubenswrapper[4870]: I0216 17:15:31.158214 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2" event={"ID":"c7148109-37fd-4199-9cca-df3f97d2d070","Type":"ContainerStarted","Data":"29641b59732defe228540e312d58e3ef560009d10f2203b74aeeb2fc9016120a"} Feb 16 17:15:31 crc kubenswrapper[4870]: I0216 17:15:31.158443 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2" Feb 16 17:15:31 crc kubenswrapper[4870]: I0216 17:15:31.178977 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt" podStartSLOduration=2.9208604989999998 podStartE2EDuration="9.178942565s" podCreationTimestamp="2026-02-16 17:15:22 +0000 UTC" firstStartedPulling="2026-02-16 17:15:23.666545649 +0000 UTC 
m=+928.150010033" lastFinishedPulling="2026-02-16 17:15:29.924627715 +0000 UTC m=+934.408092099" observedRunningTime="2026-02-16 17:15:31.170135144 +0000 UTC m=+935.653599528" watchObservedRunningTime="2026-02-16 17:15:31.178942565 +0000 UTC m=+935.662406949" Feb 16 17:15:31 crc kubenswrapper[4870]: I0216 17:15:31.190689 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2" podStartSLOduration=2.255583871 podStartE2EDuration="8.19067194s" podCreationTimestamp="2026-02-16 17:15:23 +0000 UTC" firstStartedPulling="2026-02-16 17:15:24.036179178 +0000 UTC m=+928.519643562" lastFinishedPulling="2026-02-16 17:15:29.971267247 +0000 UTC m=+934.454731631" observedRunningTime="2026-02-16 17:15:31.187937042 +0000 UTC m=+935.671401416" watchObservedRunningTime="2026-02-16 17:15:31.19067194 +0000 UTC m=+935.674136324" Feb 16 17:15:31 crc kubenswrapper[4870]: I0216 17:15:31.241071 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j9h6z" podStartSLOduration=3.074275958 podStartE2EDuration="8.241053429s" podCreationTimestamp="2026-02-16 17:15:23 +0000 UTC" firstStartedPulling="2026-02-16 17:15:24.775128967 +0000 UTC m=+929.258593351" lastFinishedPulling="2026-02-16 17:15:29.941906438 +0000 UTC m=+934.425370822" observedRunningTime="2026-02-16 17:15:31.225981749 +0000 UTC m=+935.709446133" watchObservedRunningTime="2026-02-16 17:15:31.241053429 +0000 UTC m=+935.724517813" Feb 16 17:15:32 crc kubenswrapper[4870]: I0216 17:15:32.166004 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l55b" event={"ID":"ef671923-9ea8-4aba-862d-ce9d82cbaab1","Type":"ContainerStarted","Data":"634ef542b0f130e4a1fc2193cb94f5b7785f9fcc7d2f8fa004fc3d83d82dbb1c"} Feb 16 17:15:33 crc kubenswrapper[4870]: I0216 17:15:33.176609 4870 generic.go:334] "Generic (PLEG): container finished" 
podID="ef671923-9ea8-4aba-862d-ce9d82cbaab1" containerID="634ef542b0f130e4a1fc2193cb94f5b7785f9fcc7d2f8fa004fc3d83d82dbb1c" exitCode=0 Feb 16 17:15:33 crc kubenswrapper[4870]: I0216 17:15:33.176658 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l55b" event={"ID":"ef671923-9ea8-4aba-862d-ce9d82cbaab1","Type":"ContainerDied","Data":"634ef542b0f130e4a1fc2193cb94f5b7785f9fcc7d2f8fa004fc3d83d82dbb1c"} Feb 16 17:15:33 crc kubenswrapper[4870]: I0216 17:15:33.348262 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j9h6z" Feb 16 17:15:33 crc kubenswrapper[4870]: I0216 17:15:33.348338 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j9h6z" Feb 16 17:15:33 crc kubenswrapper[4870]: I0216 17:15:33.417547 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j9h6z" Feb 16 17:15:34 crc kubenswrapper[4870]: I0216 17:15:34.191555 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l55b" event={"ID":"ef671923-9ea8-4aba-862d-ce9d82cbaab1","Type":"ContainerStarted","Data":"90399df2bfce3c8cbf62f22c7195ecaa35fc5a942b806963d43370b08deb5172"} Feb 16 17:15:34 crc kubenswrapper[4870]: I0216 17:15:34.217412 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2l55b" podStartSLOduration=5.782751313 podStartE2EDuration="8.2173862s" podCreationTimestamp="2026-02-16 17:15:26 +0000 UTC" firstStartedPulling="2026-02-16 17:15:31.156897355 +0000 UTC m=+935.640361749" lastFinishedPulling="2026-02-16 17:15:33.591532262 +0000 UTC m=+938.074996636" observedRunningTime="2026-02-16 17:15:34.211200573 +0000 UTC m=+938.694664957" watchObservedRunningTime="2026-02-16 17:15:34.2173862 +0000 UTC m=+938.700850584" Feb 16 17:15:35 crc 
kubenswrapper[4870]: I0216 17:15:35.367622 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:15:35 crc kubenswrapper[4870]: I0216 17:15:35.368123 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:15:35 crc kubenswrapper[4870]: I0216 17:15:35.368221 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:15:35 crc kubenswrapper[4870]: I0216 17:15:35.369462 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae9b5f8dd0e4675f99af74251a96ffd60d2f653f4d32feb06324bf4aaba5fef5"} pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:15:35 crc kubenswrapper[4870]: I0216 17:15:35.369625 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" containerID="cri-o://ae9b5f8dd0e4675f99af74251a96ffd60d2f653f4d32feb06324bf4aaba5fef5" gracePeriod=600 Feb 16 17:15:36 crc kubenswrapper[4870]: I0216 17:15:36.219984 4870 generic.go:334] "Generic (PLEG): container finished" podID="a3e693e8-f31b-4cc5-b521-0f37451019ab" 
containerID="ae9b5f8dd0e4675f99af74251a96ffd60d2f653f4d32feb06324bf4aaba5fef5" exitCode=0 Feb 16 17:15:36 crc kubenswrapper[4870]: I0216 17:15:36.220073 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerDied","Data":"ae9b5f8dd0e4675f99af74251a96ffd60d2f653f4d32feb06324bf4aaba5fef5"} Feb 16 17:15:36 crc kubenswrapper[4870]: I0216 17:15:36.220360 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerStarted","Data":"c6cb73ad3168219aed3caa65ecbcfeaf20afa41eba328438ce91697a527d897b"} Feb 16 17:15:36 crc kubenswrapper[4870]: I0216 17:15:36.220386 4870 scope.go:117] "RemoveContainer" containerID="02e7dc6801b04294cf296b6adc3615ad5492b082048d6e70fbe5ab1eef8f5cb4" Feb 16 17:15:37 crc kubenswrapper[4870]: I0216 17:15:37.179364 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2l55b" Feb 16 17:15:37 crc kubenswrapper[4870]: I0216 17:15:37.180599 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2l55b" Feb 16 17:15:37 crc kubenswrapper[4870]: I0216 17:15:37.248445 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2l55b" Feb 16 17:15:43 crc kubenswrapper[4870]: I0216 17:15:43.393126 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j9h6z" Feb 16 17:15:43 crc kubenswrapper[4870]: I0216 17:15:43.449189 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j9h6z"] Feb 16 17:15:43 crc kubenswrapper[4870]: I0216 17:15:43.645233 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="metallb-system/metallb-operator-webhook-server-6b886dc755-4pxj2" Feb 16 17:15:44 crc kubenswrapper[4870]: I0216 17:15:44.277874 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j9h6z" podUID="b368e9d0-b052-4739-9091-f48ba8e4d65d" containerName="registry-server" containerID="cri-o://c4653db2065941d09bf4e65aca015c97b9134332c7abedf3cc71d72c39b81d8e" gracePeriod=2 Feb 16 17:15:44 crc kubenswrapper[4870]: I0216 17:15:44.671276 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j9h6z" Feb 16 17:15:44 crc kubenswrapper[4870]: I0216 17:15:44.816369 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b368e9d0-b052-4739-9091-f48ba8e4d65d-catalog-content\") pod \"b368e9d0-b052-4739-9091-f48ba8e4d65d\" (UID: \"b368e9d0-b052-4739-9091-f48ba8e4d65d\") " Feb 16 17:15:44 crc kubenswrapper[4870]: I0216 17:15:44.816468 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvvkf\" (UniqueName: \"kubernetes.io/projected/b368e9d0-b052-4739-9091-f48ba8e4d65d-kube-api-access-gvvkf\") pod \"b368e9d0-b052-4739-9091-f48ba8e4d65d\" (UID: \"b368e9d0-b052-4739-9091-f48ba8e4d65d\") " Feb 16 17:15:44 crc kubenswrapper[4870]: I0216 17:15:44.816493 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b368e9d0-b052-4739-9091-f48ba8e4d65d-utilities\") pod \"b368e9d0-b052-4739-9091-f48ba8e4d65d\" (UID: \"b368e9d0-b052-4739-9091-f48ba8e4d65d\") " Feb 16 17:15:44 crc kubenswrapper[4870]: I0216 17:15:44.817405 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b368e9d0-b052-4739-9091-f48ba8e4d65d-utilities" (OuterVolumeSpecName: "utilities") pod 
"b368e9d0-b052-4739-9091-f48ba8e4d65d" (UID: "b368e9d0-b052-4739-9091-f48ba8e4d65d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:15:44 crc kubenswrapper[4870]: I0216 17:15:44.826031 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b368e9d0-b052-4739-9091-f48ba8e4d65d-kube-api-access-gvvkf" (OuterVolumeSpecName: "kube-api-access-gvvkf") pod "b368e9d0-b052-4739-9091-f48ba8e4d65d" (UID: "b368e9d0-b052-4739-9091-f48ba8e4d65d"). InnerVolumeSpecName "kube-api-access-gvvkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:15:44 crc kubenswrapper[4870]: I0216 17:15:44.872149 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b368e9d0-b052-4739-9091-f48ba8e4d65d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b368e9d0-b052-4739-9091-f48ba8e4d65d" (UID: "b368e9d0-b052-4739-9091-f48ba8e4d65d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:15:44 crc kubenswrapper[4870]: I0216 17:15:44.917842 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvvkf\" (UniqueName: \"kubernetes.io/projected/b368e9d0-b052-4739-9091-f48ba8e4d65d-kube-api-access-gvvkf\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:44 crc kubenswrapper[4870]: I0216 17:15:44.917891 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b368e9d0-b052-4739-9091-f48ba8e4d65d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:44 crc kubenswrapper[4870]: I0216 17:15:44.917907 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b368e9d0-b052-4739-9091-f48ba8e4d65d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:45 crc kubenswrapper[4870]: I0216 17:15:45.287308 4870 generic.go:334] "Generic (PLEG): container finished" podID="b368e9d0-b052-4739-9091-f48ba8e4d65d" containerID="c4653db2065941d09bf4e65aca015c97b9134332c7abedf3cc71d72c39b81d8e" exitCode=0 Feb 16 17:15:45 crc kubenswrapper[4870]: I0216 17:15:45.287354 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9h6z" event={"ID":"b368e9d0-b052-4739-9091-f48ba8e4d65d","Type":"ContainerDied","Data":"c4653db2065941d09bf4e65aca015c97b9134332c7abedf3cc71d72c39b81d8e"} Feb 16 17:15:45 crc kubenswrapper[4870]: I0216 17:15:45.287388 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j9h6z" event={"ID":"b368e9d0-b052-4739-9091-f48ba8e4d65d","Type":"ContainerDied","Data":"a4526892ae16d4cccd2ac8c7c5078b28b1feb1ae593e70c0201bb63b24342544"} Feb 16 17:15:45 crc kubenswrapper[4870]: I0216 17:15:45.287410 4870 scope.go:117] "RemoveContainer" containerID="c4653db2065941d09bf4e65aca015c97b9134332c7abedf3cc71d72c39b81d8e" Feb 16 17:15:45 crc kubenswrapper[4870]: I0216 
17:15:45.287406 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j9h6z" Feb 16 17:15:45 crc kubenswrapper[4870]: I0216 17:15:45.302919 4870 scope.go:117] "RemoveContainer" containerID="db0ff11df61209f83ed7c920be98fcfb9b5fbf61c83b503e05832772485fd4e1" Feb 16 17:15:45 crc kubenswrapper[4870]: I0216 17:15:45.315318 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j9h6z"] Feb 16 17:15:45 crc kubenswrapper[4870]: I0216 17:15:45.320233 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j9h6z"] Feb 16 17:15:45 crc kubenswrapper[4870]: I0216 17:15:45.345960 4870 scope.go:117] "RemoveContainer" containerID="f90d89f8839352486b96bfce0c480236789c84e64e559774c67e0afb3a59cc19" Feb 16 17:15:45 crc kubenswrapper[4870]: I0216 17:15:45.360611 4870 scope.go:117] "RemoveContainer" containerID="c4653db2065941d09bf4e65aca015c97b9134332c7abedf3cc71d72c39b81d8e" Feb 16 17:15:45 crc kubenswrapper[4870]: E0216 17:15:45.361251 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4653db2065941d09bf4e65aca015c97b9134332c7abedf3cc71d72c39b81d8e\": container with ID starting with c4653db2065941d09bf4e65aca015c97b9134332c7abedf3cc71d72c39b81d8e not found: ID does not exist" containerID="c4653db2065941d09bf4e65aca015c97b9134332c7abedf3cc71d72c39b81d8e" Feb 16 17:15:45 crc kubenswrapper[4870]: I0216 17:15:45.361299 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4653db2065941d09bf4e65aca015c97b9134332c7abedf3cc71d72c39b81d8e"} err="failed to get container status \"c4653db2065941d09bf4e65aca015c97b9134332c7abedf3cc71d72c39b81d8e\": rpc error: code = NotFound desc = could not find container \"c4653db2065941d09bf4e65aca015c97b9134332c7abedf3cc71d72c39b81d8e\": container with ID starting with 
c4653db2065941d09bf4e65aca015c97b9134332c7abedf3cc71d72c39b81d8e not found: ID does not exist" Feb 16 17:15:45 crc kubenswrapper[4870]: I0216 17:15:45.361322 4870 scope.go:117] "RemoveContainer" containerID="db0ff11df61209f83ed7c920be98fcfb9b5fbf61c83b503e05832772485fd4e1" Feb 16 17:15:45 crc kubenswrapper[4870]: E0216 17:15:45.361529 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db0ff11df61209f83ed7c920be98fcfb9b5fbf61c83b503e05832772485fd4e1\": container with ID starting with db0ff11df61209f83ed7c920be98fcfb9b5fbf61c83b503e05832772485fd4e1 not found: ID does not exist" containerID="db0ff11df61209f83ed7c920be98fcfb9b5fbf61c83b503e05832772485fd4e1" Feb 16 17:15:45 crc kubenswrapper[4870]: I0216 17:15:45.361551 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0ff11df61209f83ed7c920be98fcfb9b5fbf61c83b503e05832772485fd4e1"} err="failed to get container status \"db0ff11df61209f83ed7c920be98fcfb9b5fbf61c83b503e05832772485fd4e1\": rpc error: code = NotFound desc = could not find container \"db0ff11df61209f83ed7c920be98fcfb9b5fbf61c83b503e05832772485fd4e1\": container with ID starting with db0ff11df61209f83ed7c920be98fcfb9b5fbf61c83b503e05832772485fd4e1 not found: ID does not exist" Feb 16 17:15:45 crc kubenswrapper[4870]: I0216 17:15:45.361566 4870 scope.go:117] "RemoveContainer" containerID="f90d89f8839352486b96bfce0c480236789c84e64e559774c67e0afb3a59cc19" Feb 16 17:15:45 crc kubenswrapper[4870]: E0216 17:15:45.361759 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f90d89f8839352486b96bfce0c480236789c84e64e559774c67e0afb3a59cc19\": container with ID starting with f90d89f8839352486b96bfce0c480236789c84e64e559774c67e0afb3a59cc19 not found: ID does not exist" containerID="f90d89f8839352486b96bfce0c480236789c84e64e559774c67e0afb3a59cc19" Feb 16 17:15:45 crc 
kubenswrapper[4870]: I0216 17:15:45.361996 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f90d89f8839352486b96bfce0c480236789c84e64e559774c67e0afb3a59cc19"} err="failed to get container status \"f90d89f8839352486b96bfce0c480236789c84e64e559774c67e0afb3a59cc19\": rpc error: code = NotFound desc = could not find container \"f90d89f8839352486b96bfce0c480236789c84e64e559774c67e0afb3a59cc19\": container with ID starting with f90d89f8839352486b96bfce0c480236789c84e64e559774c67e0afb3a59cc19 not found: ID does not exist" Feb 16 17:15:46 crc kubenswrapper[4870]: I0216 17:15:46.231071 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b368e9d0-b052-4739-9091-f48ba8e4d65d" path="/var/lib/kubelet/pods/b368e9d0-b052-4739-9091-f48ba8e4d65d/volumes" Feb 16 17:15:47 crc kubenswrapper[4870]: I0216 17:15:47.219835 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2l55b" Feb 16 17:15:49 crc kubenswrapper[4870]: I0216 17:15:49.228856 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2l55b"] Feb 16 17:15:49 crc kubenswrapper[4870]: I0216 17:15:49.229184 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2l55b" podUID="ef671923-9ea8-4aba-862d-ce9d82cbaab1" containerName="registry-server" containerID="cri-o://90399df2bfce3c8cbf62f22c7195ecaa35fc5a942b806963d43370b08deb5172" gracePeriod=2 Feb 16 17:15:49 crc kubenswrapper[4870]: I0216 17:15:49.621958 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2l55b" Feb 16 17:15:49 crc kubenswrapper[4870]: I0216 17:15:49.778792 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef671923-9ea8-4aba-862d-ce9d82cbaab1-catalog-content\") pod \"ef671923-9ea8-4aba-862d-ce9d82cbaab1\" (UID: \"ef671923-9ea8-4aba-862d-ce9d82cbaab1\") " Feb 16 17:15:49 crc kubenswrapper[4870]: I0216 17:15:49.778863 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8xtk\" (UniqueName: \"kubernetes.io/projected/ef671923-9ea8-4aba-862d-ce9d82cbaab1-kube-api-access-t8xtk\") pod \"ef671923-9ea8-4aba-862d-ce9d82cbaab1\" (UID: \"ef671923-9ea8-4aba-862d-ce9d82cbaab1\") " Feb 16 17:15:49 crc kubenswrapper[4870]: I0216 17:15:49.779006 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef671923-9ea8-4aba-862d-ce9d82cbaab1-utilities\") pod \"ef671923-9ea8-4aba-862d-ce9d82cbaab1\" (UID: \"ef671923-9ea8-4aba-862d-ce9d82cbaab1\") " Feb 16 17:15:49 crc kubenswrapper[4870]: I0216 17:15:49.780644 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef671923-9ea8-4aba-862d-ce9d82cbaab1-utilities" (OuterVolumeSpecName: "utilities") pod "ef671923-9ea8-4aba-862d-ce9d82cbaab1" (UID: "ef671923-9ea8-4aba-862d-ce9d82cbaab1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:15:49 crc kubenswrapper[4870]: I0216 17:15:49.785185 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef671923-9ea8-4aba-862d-ce9d82cbaab1-kube-api-access-t8xtk" (OuterVolumeSpecName: "kube-api-access-t8xtk") pod "ef671923-9ea8-4aba-862d-ce9d82cbaab1" (UID: "ef671923-9ea8-4aba-862d-ce9d82cbaab1"). InnerVolumeSpecName "kube-api-access-t8xtk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:15:49 crc kubenswrapper[4870]: I0216 17:15:49.833981 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef671923-9ea8-4aba-862d-ce9d82cbaab1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef671923-9ea8-4aba-862d-ce9d82cbaab1" (UID: "ef671923-9ea8-4aba-862d-ce9d82cbaab1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:15:49 crc kubenswrapper[4870]: I0216 17:15:49.880441 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef671923-9ea8-4aba-862d-ce9d82cbaab1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:49 crc kubenswrapper[4870]: I0216 17:15:49.880469 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8xtk\" (UniqueName: \"kubernetes.io/projected/ef671923-9ea8-4aba-862d-ce9d82cbaab1-kube-api-access-t8xtk\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:49 crc kubenswrapper[4870]: I0216 17:15:49.880481 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef671923-9ea8-4aba-862d-ce9d82cbaab1-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:15:50 crc kubenswrapper[4870]: I0216 17:15:50.318374 4870 generic.go:334] "Generic (PLEG): container finished" podID="ef671923-9ea8-4aba-862d-ce9d82cbaab1" containerID="90399df2bfce3c8cbf62f22c7195ecaa35fc5a942b806963d43370b08deb5172" exitCode=0 Feb 16 17:15:50 crc kubenswrapper[4870]: I0216 17:15:50.318420 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l55b" event={"ID":"ef671923-9ea8-4aba-862d-ce9d82cbaab1","Type":"ContainerDied","Data":"90399df2bfce3c8cbf62f22c7195ecaa35fc5a942b806963d43370b08deb5172"} Feb 16 17:15:50 crc kubenswrapper[4870]: I0216 17:15:50.318685 4870 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-2l55b" event={"ID":"ef671923-9ea8-4aba-862d-ce9d82cbaab1","Type":"ContainerDied","Data":"30d29405cf2855ab500bb9227e3f2c14a2189e29b66537252f2c9b99eca00af6"} Feb 16 17:15:50 crc kubenswrapper[4870]: I0216 17:15:50.318707 4870 scope.go:117] "RemoveContainer" containerID="90399df2bfce3c8cbf62f22c7195ecaa35fc5a942b806963d43370b08deb5172" Feb 16 17:15:50 crc kubenswrapper[4870]: I0216 17:15:50.318439 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2l55b" Feb 16 17:15:50 crc kubenswrapper[4870]: I0216 17:15:50.333669 4870 scope.go:117] "RemoveContainer" containerID="634ef542b0f130e4a1fc2193cb94f5b7785f9fcc7d2f8fa004fc3d83d82dbb1c" Feb 16 17:15:50 crc kubenswrapper[4870]: I0216 17:15:50.333899 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2l55b"] Feb 16 17:15:50 crc kubenswrapper[4870]: I0216 17:15:50.340799 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2l55b"] Feb 16 17:15:50 crc kubenswrapper[4870]: I0216 17:15:50.352938 4870 scope.go:117] "RemoveContainer" containerID="da141e738bfe263168cccf4b876298f656d849e80b3324239dc49e92d3138645" Feb 16 17:15:50 crc kubenswrapper[4870]: I0216 17:15:50.370868 4870 scope.go:117] "RemoveContainer" containerID="90399df2bfce3c8cbf62f22c7195ecaa35fc5a942b806963d43370b08deb5172" Feb 16 17:15:50 crc kubenswrapper[4870]: E0216 17:15:50.371461 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90399df2bfce3c8cbf62f22c7195ecaa35fc5a942b806963d43370b08deb5172\": container with ID starting with 90399df2bfce3c8cbf62f22c7195ecaa35fc5a942b806963d43370b08deb5172 not found: ID does not exist" containerID="90399df2bfce3c8cbf62f22c7195ecaa35fc5a942b806963d43370b08deb5172" Feb 16 17:15:50 crc kubenswrapper[4870]: I0216 
17:15:50.371517 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90399df2bfce3c8cbf62f22c7195ecaa35fc5a942b806963d43370b08deb5172"} err="failed to get container status \"90399df2bfce3c8cbf62f22c7195ecaa35fc5a942b806963d43370b08deb5172\": rpc error: code = NotFound desc = could not find container \"90399df2bfce3c8cbf62f22c7195ecaa35fc5a942b806963d43370b08deb5172\": container with ID starting with 90399df2bfce3c8cbf62f22c7195ecaa35fc5a942b806963d43370b08deb5172 not found: ID does not exist" Feb 16 17:15:50 crc kubenswrapper[4870]: I0216 17:15:50.371553 4870 scope.go:117] "RemoveContainer" containerID="634ef542b0f130e4a1fc2193cb94f5b7785f9fcc7d2f8fa004fc3d83d82dbb1c" Feb 16 17:15:50 crc kubenswrapper[4870]: E0216 17:15:50.372028 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"634ef542b0f130e4a1fc2193cb94f5b7785f9fcc7d2f8fa004fc3d83d82dbb1c\": container with ID starting with 634ef542b0f130e4a1fc2193cb94f5b7785f9fcc7d2f8fa004fc3d83d82dbb1c not found: ID does not exist" containerID="634ef542b0f130e4a1fc2193cb94f5b7785f9fcc7d2f8fa004fc3d83d82dbb1c" Feb 16 17:15:50 crc kubenswrapper[4870]: I0216 17:15:50.372069 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"634ef542b0f130e4a1fc2193cb94f5b7785f9fcc7d2f8fa004fc3d83d82dbb1c"} err="failed to get container status \"634ef542b0f130e4a1fc2193cb94f5b7785f9fcc7d2f8fa004fc3d83d82dbb1c\": rpc error: code = NotFound desc = could not find container \"634ef542b0f130e4a1fc2193cb94f5b7785f9fcc7d2f8fa004fc3d83d82dbb1c\": container with ID starting with 634ef542b0f130e4a1fc2193cb94f5b7785f9fcc7d2f8fa004fc3d83d82dbb1c not found: ID does not exist" Feb 16 17:15:50 crc kubenswrapper[4870]: I0216 17:15:50.372098 4870 scope.go:117] "RemoveContainer" containerID="da141e738bfe263168cccf4b876298f656d849e80b3324239dc49e92d3138645" Feb 16 17:15:50 crc 
kubenswrapper[4870]: E0216 17:15:50.372418 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da141e738bfe263168cccf4b876298f656d849e80b3324239dc49e92d3138645\": container with ID starting with da141e738bfe263168cccf4b876298f656d849e80b3324239dc49e92d3138645 not found: ID does not exist" containerID="da141e738bfe263168cccf4b876298f656d849e80b3324239dc49e92d3138645" Feb 16 17:15:50 crc kubenswrapper[4870]: I0216 17:15:50.372439 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da141e738bfe263168cccf4b876298f656d849e80b3324239dc49e92d3138645"} err="failed to get container status \"da141e738bfe263168cccf4b876298f656d849e80b3324239dc49e92d3138645\": rpc error: code = NotFound desc = could not find container \"da141e738bfe263168cccf4b876298f656d849e80b3324239dc49e92d3138645\": container with ID starting with da141e738bfe263168cccf4b876298f656d849e80b3324239dc49e92d3138645 not found: ID does not exist" Feb 16 17:15:52 crc kubenswrapper[4870]: I0216 17:15:52.230989 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef671923-9ea8-4aba-862d-ce9d82cbaab1" path="/var/lib/kubelet/pods/ef671923-9ea8-4aba-862d-ce9d82cbaab1/volumes" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.440677 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lnfwq"] Feb 16 17:15:54 crc kubenswrapper[4870]: E0216 17:15:54.441123 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b368e9d0-b052-4739-9091-f48ba8e4d65d" containerName="extract-utilities" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.441148 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b368e9d0-b052-4739-9091-f48ba8e4d65d" containerName="extract-utilities" Feb 16 17:15:54 crc kubenswrapper[4870]: E0216 17:15:54.441165 4870 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b368e9d0-b052-4739-9091-f48ba8e4d65d" containerName="extract-content" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.441176 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b368e9d0-b052-4739-9091-f48ba8e4d65d" containerName="extract-content" Feb 16 17:15:54 crc kubenswrapper[4870]: E0216 17:15:54.441198 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b368e9d0-b052-4739-9091-f48ba8e4d65d" containerName="registry-server" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.441208 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b368e9d0-b052-4739-9091-f48ba8e4d65d" containerName="registry-server" Feb 16 17:15:54 crc kubenswrapper[4870]: E0216 17:15:54.441229 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef671923-9ea8-4aba-862d-ce9d82cbaab1" containerName="extract-content" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.441239 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef671923-9ea8-4aba-862d-ce9d82cbaab1" containerName="extract-content" Feb 16 17:15:54 crc kubenswrapper[4870]: E0216 17:15:54.441253 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef671923-9ea8-4aba-862d-ce9d82cbaab1" containerName="extract-utilities" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.441263 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef671923-9ea8-4aba-862d-ce9d82cbaab1" containerName="extract-utilities" Feb 16 17:15:54 crc kubenswrapper[4870]: E0216 17:15:54.441279 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef671923-9ea8-4aba-862d-ce9d82cbaab1" containerName="registry-server" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.441291 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef671923-9ea8-4aba-862d-ce9d82cbaab1" containerName="registry-server" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.441474 4870 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ef671923-9ea8-4aba-862d-ce9d82cbaab1" containerName="registry-server" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.441498 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="b368e9d0-b052-4739-9091-f48ba8e4d65d" containerName="registry-server" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.442701 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lnfwq" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.469920 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lnfwq"] Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.638073 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-utilities\") pod \"redhat-marketplace-lnfwq\" (UID: \"e2a9ae14-e8f8-4907-ae7a-ec7280968b16\") " pod="openshift-marketplace/redhat-marketplace-lnfwq" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.638273 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szvr2\" (UniqueName: \"kubernetes.io/projected/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-kube-api-access-szvr2\") pod \"redhat-marketplace-lnfwq\" (UID: \"e2a9ae14-e8f8-4907-ae7a-ec7280968b16\") " pod="openshift-marketplace/redhat-marketplace-lnfwq" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.638383 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-catalog-content\") pod \"redhat-marketplace-lnfwq\" (UID: \"e2a9ae14-e8f8-4907-ae7a-ec7280968b16\") " pod="openshift-marketplace/redhat-marketplace-lnfwq" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.740075 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-szvr2\" (UniqueName: \"kubernetes.io/projected/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-kube-api-access-szvr2\") pod \"redhat-marketplace-lnfwq\" (UID: \"e2a9ae14-e8f8-4907-ae7a-ec7280968b16\") " pod="openshift-marketplace/redhat-marketplace-lnfwq" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.740155 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-catalog-content\") pod \"redhat-marketplace-lnfwq\" (UID: \"e2a9ae14-e8f8-4907-ae7a-ec7280968b16\") " pod="openshift-marketplace/redhat-marketplace-lnfwq" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.740219 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-utilities\") pod \"redhat-marketplace-lnfwq\" (UID: \"e2a9ae14-e8f8-4907-ae7a-ec7280968b16\") " pod="openshift-marketplace/redhat-marketplace-lnfwq" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.740622 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-catalog-content\") pod \"redhat-marketplace-lnfwq\" (UID: \"e2a9ae14-e8f8-4907-ae7a-ec7280968b16\") " pod="openshift-marketplace/redhat-marketplace-lnfwq" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.740702 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-utilities\") pod \"redhat-marketplace-lnfwq\" (UID: \"e2a9ae14-e8f8-4907-ae7a-ec7280968b16\") " pod="openshift-marketplace/redhat-marketplace-lnfwq" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.760764 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-szvr2\" (UniqueName: \"kubernetes.io/projected/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-kube-api-access-szvr2\") pod \"redhat-marketplace-lnfwq\" (UID: \"e2a9ae14-e8f8-4907-ae7a-ec7280968b16\") " pod="openshift-marketplace/redhat-marketplace-lnfwq" Feb 16 17:15:54 crc kubenswrapper[4870]: I0216 17:15:54.771519 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lnfwq" Feb 16 17:15:55 crc kubenswrapper[4870]: I0216 17:15:55.193102 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lnfwq"] Feb 16 17:15:55 crc kubenswrapper[4870]: I0216 17:15:55.378735 4870 generic.go:334] "Generic (PLEG): container finished" podID="e2a9ae14-e8f8-4907-ae7a-ec7280968b16" containerID="ceb797467672dc912fde63fc1a86f2d1455e156a2ebf5db47d717ce2f0fdc82d" exitCode=0 Feb 16 17:15:55 crc kubenswrapper[4870]: I0216 17:15:55.378913 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lnfwq" event={"ID":"e2a9ae14-e8f8-4907-ae7a-ec7280968b16","Type":"ContainerDied","Data":"ceb797467672dc912fde63fc1a86f2d1455e156a2ebf5db47d717ce2f0fdc82d"} Feb 16 17:15:55 crc kubenswrapper[4870]: I0216 17:15:55.379137 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lnfwq" event={"ID":"e2a9ae14-e8f8-4907-ae7a-ec7280968b16","Type":"ContainerStarted","Data":"e831de3225c4316e63074f1d67386c181e0742cbeee45f4433fb0c114dcea615"} Feb 16 17:15:56 crc kubenswrapper[4870]: I0216 17:15:56.385645 4870 generic.go:334] "Generic (PLEG): container finished" podID="e2a9ae14-e8f8-4907-ae7a-ec7280968b16" containerID="2edec9f0a411590de2f739b15dac9cdeed1428868d2f63d695dc7a7772d76acb" exitCode=0 Feb 16 17:15:56 crc kubenswrapper[4870]: I0216 17:15:56.385715 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lnfwq" 
event={"ID":"e2a9ae14-e8f8-4907-ae7a-ec7280968b16","Type":"ContainerDied","Data":"2edec9f0a411590de2f739b15dac9cdeed1428868d2f63d695dc7a7772d76acb"} Feb 16 17:15:57 crc kubenswrapper[4870]: I0216 17:15:57.395486 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lnfwq" event={"ID":"e2a9ae14-e8f8-4907-ae7a-ec7280968b16","Type":"ContainerStarted","Data":"ee12dff3cd1aaab9410078112477f082cef87e57b2681883aa841b9ac9b71e52"} Feb 16 17:15:57 crc kubenswrapper[4870]: I0216 17:15:57.413467 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lnfwq" podStartSLOduration=1.9178601560000001 podStartE2EDuration="3.413450978s" podCreationTimestamp="2026-02-16 17:15:54 +0000 UTC" firstStartedPulling="2026-02-16 17:15:55.380335511 +0000 UTC m=+959.863799895" lastFinishedPulling="2026-02-16 17:15:56.875926333 +0000 UTC m=+961.359390717" observedRunningTime="2026-02-16 17:15:57.411379849 +0000 UTC m=+961.894844263" watchObservedRunningTime="2026-02-16 17:15:57.413450978 +0000 UTC m=+961.896915362" Feb 16 17:16:03 crc kubenswrapper[4870]: I0216 17:16:03.272768 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-dfd88577c-t9fpt" Feb 16 17:16:03 crc kubenswrapper[4870]: I0216 17:16:03.944931 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-qg9jr"] Feb 16 17:16:03 crc kubenswrapper[4870]: I0216 17:16:03.948106 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:03 crc kubenswrapper[4870]: I0216 17:16:03.950908 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-nxmrv" Feb 16 17:16:03 crc kubenswrapper[4870]: I0216 17:16:03.951333 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 16 17:16:03 crc kubenswrapper[4870]: I0216 17:16:03.952580 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-sl5w2"] Feb 16 17:16:03 crc kubenswrapper[4870]: I0216 17:16:03.953230 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 16 17:16:03 crc kubenswrapper[4870]: I0216 17:16:03.953695 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sl5w2" Feb 16 17:16:03 crc kubenswrapper[4870]: I0216 17:16:03.956742 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.010373 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-sl5w2"] Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.069529 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggvpr\" (UniqueName: \"kubernetes.io/projected/601db539-ff99-4286-b333-89a76d744d27-kube-api-access-ggvpr\") pod \"frr-k8s-webhook-server-78b44bf5bb-sl5w2\" (UID: \"601db539-ff99-4286-b333-89a76d744d27\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sl5w2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.069603 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t9hf\" (UniqueName: 
\"kubernetes.io/projected/d51255e7-059d-4c69-9a01-f90249fe53bf-kube-api-access-6t9hf\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.069630 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d51255e7-059d-4c69-9a01-f90249fe53bf-metrics\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.069649 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d51255e7-059d-4c69-9a01-f90249fe53bf-frr-startup\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.069680 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d51255e7-059d-4c69-9a01-f90249fe53bf-frr-sockets\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.069694 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d51255e7-059d-4c69-9a01-f90249fe53bf-metrics-certs\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.069710 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d51255e7-059d-4c69-9a01-f90249fe53bf-frr-conf\") pod \"frr-k8s-qg9jr\" (UID: 
\"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.069725 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d51255e7-059d-4c69-9a01-f90249fe53bf-reloader\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.069743 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/601db539-ff99-4286-b333-89a76d744d27-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-sl5w2\" (UID: \"601db539-ff99-4286-b333-89a76d744d27\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sl5w2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.113553 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-2qwb2"] Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.116966 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-2qwb2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.121592 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.127878 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-fh77x"] Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.128925 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-fh77x" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.134431 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-4fs58" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.134592 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.134687 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.134804 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.140437 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-2qwb2"] Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.171678 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t9hf\" (UniqueName: \"kubernetes.io/projected/d51255e7-059d-4c69-9a01-f90249fe53bf-kube-api-access-6t9hf\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.171734 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d51255e7-059d-4c69-9a01-f90249fe53bf-metrics\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.171755 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d51255e7-059d-4c69-9a01-f90249fe53bf-frr-startup\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 
16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.171786 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d51255e7-059d-4c69-9a01-f90249fe53bf-frr-sockets\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.171805 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d51255e7-059d-4c69-9a01-f90249fe53bf-metrics-certs\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.171822 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d51255e7-059d-4c69-9a01-f90249fe53bf-frr-conf\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.171838 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d51255e7-059d-4c69-9a01-f90249fe53bf-reloader\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.171856 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/601db539-ff99-4286-b333-89a76d744d27-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-sl5w2\" (UID: \"601db539-ff99-4286-b333-89a76d744d27\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sl5w2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.171884 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggvpr\" (UniqueName: 
\"kubernetes.io/projected/601db539-ff99-4286-b333-89a76d744d27-kube-api-access-ggvpr\") pod \"frr-k8s-webhook-server-78b44bf5bb-sl5w2\" (UID: \"601db539-ff99-4286-b333-89a76d744d27\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sl5w2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.172433 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d51255e7-059d-4c69-9a01-f90249fe53bf-metrics\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.172610 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d51255e7-059d-4c69-9a01-f90249fe53bf-frr-conf\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.172772 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d51255e7-059d-4c69-9a01-f90249fe53bf-reloader\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.174634 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d51255e7-059d-4c69-9a01-f90249fe53bf-frr-sockets\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.175142 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d51255e7-059d-4c69-9a01-f90249fe53bf-frr-startup\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc 
kubenswrapper[4870]: I0216 17:16:04.186253 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/601db539-ff99-4286-b333-89a76d744d27-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-sl5w2\" (UID: \"601db539-ff99-4286-b333-89a76d744d27\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sl5w2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.189731 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d51255e7-059d-4c69-9a01-f90249fe53bf-metrics-certs\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.212180 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t9hf\" (UniqueName: \"kubernetes.io/projected/d51255e7-059d-4c69-9a01-f90249fe53bf-kube-api-access-6t9hf\") pod \"frr-k8s-qg9jr\" (UID: \"d51255e7-059d-4c69-9a01-f90249fe53bf\") " pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.229633 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggvpr\" (UniqueName: \"kubernetes.io/projected/601db539-ff99-4286-b333-89a76d744d27-kube-api-access-ggvpr\") pod \"frr-k8s-webhook-server-78b44bf5bb-sl5w2\" (UID: \"601db539-ff99-4286-b333-89a76d744d27\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sl5w2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.272900 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d1cdf458-f970-4341-b2e9-f0752bf88a9c-metallb-excludel2\") pod \"speaker-fh77x\" (UID: \"d1cdf458-f970-4341-b2e9-f0752bf88a9c\") " pod="metallb-system/speaker-fh77x" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.272960 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1cdf458-f970-4341-b2e9-f0752bf88a9c-metrics-certs\") pod \"speaker-fh77x\" (UID: \"d1cdf458-f970-4341-b2e9-f0752bf88a9c\") " pod="metallb-system/speaker-fh77x" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.272982 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b8e45f6-6719-4e08-832f-fb4074dc21b7-metrics-certs\") pod \"controller-69bbfbf88f-2qwb2\" (UID: \"6b8e45f6-6719-4e08-832f-fb4074dc21b7\") " pod="metallb-system/controller-69bbfbf88f-2qwb2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.273016 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d1cdf458-f970-4341-b2e9-f0752bf88a9c-memberlist\") pod \"speaker-fh77x\" (UID: \"d1cdf458-f970-4341-b2e9-f0752bf88a9c\") " pod="metallb-system/speaker-fh77x" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.273035 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5r9q\" (UniqueName: \"kubernetes.io/projected/6b8e45f6-6719-4e08-832f-fb4074dc21b7-kube-api-access-b5r9q\") pod \"controller-69bbfbf88f-2qwb2\" (UID: \"6b8e45f6-6719-4e08-832f-fb4074dc21b7\") " pod="metallb-system/controller-69bbfbf88f-2qwb2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.273078 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t8cq\" (UniqueName: \"kubernetes.io/projected/d1cdf458-f970-4341-b2e9-f0752bf88a9c-kube-api-access-8t8cq\") pod \"speaker-fh77x\" (UID: \"d1cdf458-f970-4341-b2e9-f0752bf88a9c\") " pod="metallb-system/speaker-fh77x" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.273095 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8e45f6-6719-4e08-832f-fb4074dc21b7-cert\") pod \"controller-69bbfbf88f-2qwb2\" (UID: \"6b8e45f6-6719-4e08-832f-fb4074dc21b7\") " pod="metallb-system/controller-69bbfbf88f-2qwb2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.276790 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.295348 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sl5w2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.374079 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t8cq\" (UniqueName: \"kubernetes.io/projected/d1cdf458-f970-4341-b2e9-f0752bf88a9c-kube-api-access-8t8cq\") pod \"speaker-fh77x\" (UID: \"d1cdf458-f970-4341-b2e9-f0752bf88a9c\") " pod="metallb-system/speaker-fh77x" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.374493 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8e45f6-6719-4e08-832f-fb4074dc21b7-cert\") pod \"controller-69bbfbf88f-2qwb2\" (UID: \"6b8e45f6-6719-4e08-832f-fb4074dc21b7\") " pod="metallb-system/controller-69bbfbf88f-2qwb2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.374538 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d1cdf458-f970-4341-b2e9-f0752bf88a9c-metallb-excludel2\") pod \"speaker-fh77x\" (UID: \"d1cdf458-f970-4341-b2e9-f0752bf88a9c\") " pod="metallb-system/speaker-fh77x" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.374568 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/d1cdf458-f970-4341-b2e9-f0752bf88a9c-metrics-certs\") pod \"speaker-fh77x\" (UID: \"d1cdf458-f970-4341-b2e9-f0752bf88a9c\") " pod="metallb-system/speaker-fh77x" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.374602 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b8e45f6-6719-4e08-832f-fb4074dc21b7-metrics-certs\") pod \"controller-69bbfbf88f-2qwb2\" (UID: \"6b8e45f6-6719-4e08-832f-fb4074dc21b7\") " pod="metallb-system/controller-69bbfbf88f-2qwb2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.374646 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d1cdf458-f970-4341-b2e9-f0752bf88a9c-memberlist\") pod \"speaker-fh77x\" (UID: \"d1cdf458-f970-4341-b2e9-f0752bf88a9c\") " pod="metallb-system/speaker-fh77x" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.374674 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5r9q\" (UniqueName: \"kubernetes.io/projected/6b8e45f6-6719-4e08-832f-fb4074dc21b7-kube-api-access-b5r9q\") pod \"controller-69bbfbf88f-2qwb2\" (UID: \"6b8e45f6-6719-4e08-832f-fb4074dc21b7\") " pod="metallb-system/controller-69bbfbf88f-2qwb2" Feb 16 17:16:04 crc kubenswrapper[4870]: E0216 17:16:04.375057 4870 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 17:16:04 crc kubenswrapper[4870]: E0216 17:16:04.375117 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1cdf458-f970-4341-b2e9-f0752bf88a9c-memberlist podName:d1cdf458-f970-4341-b2e9-f0752bf88a9c nodeName:}" failed. No retries permitted until 2026-02-16 17:16:04.875101882 +0000 UTC m=+969.358566266 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d1cdf458-f970-4341-b2e9-f0752bf88a9c-memberlist") pod "speaker-fh77x" (UID: "d1cdf458-f970-4341-b2e9-f0752bf88a9c") : secret "metallb-memberlist" not found Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.375801 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d1cdf458-f970-4341-b2e9-f0752bf88a9c-metallb-excludel2\") pod \"speaker-fh77x\" (UID: \"d1cdf458-f970-4341-b2e9-f0752bf88a9c\") " pod="metallb-system/speaker-fh77x" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.375932 4870 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.380381 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b8e45f6-6719-4e08-832f-fb4074dc21b7-metrics-certs\") pod \"controller-69bbfbf88f-2qwb2\" (UID: \"6b8e45f6-6719-4e08-832f-fb4074dc21b7\") " pod="metallb-system/controller-69bbfbf88f-2qwb2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.381360 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1cdf458-f970-4341-b2e9-f0752bf88a9c-metrics-certs\") pod \"speaker-fh77x\" (UID: \"d1cdf458-f970-4341-b2e9-f0752bf88a9c\") " pod="metallb-system/speaker-fh77x" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.389558 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8e45f6-6719-4e08-832f-fb4074dc21b7-cert\") pod \"controller-69bbfbf88f-2qwb2\" (UID: \"6b8e45f6-6719-4e08-832f-fb4074dc21b7\") " pod="metallb-system/controller-69bbfbf88f-2qwb2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.392185 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-8t8cq\" (UniqueName: \"kubernetes.io/projected/d1cdf458-f970-4341-b2e9-f0752bf88a9c-kube-api-access-8t8cq\") pod \"speaker-fh77x\" (UID: \"d1cdf458-f970-4341-b2e9-f0752bf88a9c\") " pod="metallb-system/speaker-fh77x" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.395762 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5r9q\" (UniqueName: \"kubernetes.io/projected/6b8e45f6-6719-4e08-832f-fb4074dc21b7-kube-api-access-b5r9q\") pod \"controller-69bbfbf88f-2qwb2\" (UID: \"6b8e45f6-6719-4e08-832f-fb4074dc21b7\") " pod="metallb-system/controller-69bbfbf88f-2qwb2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.430808 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-2qwb2" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.463839 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qg9jr" event={"ID":"d51255e7-059d-4c69-9a01-f90249fe53bf","Type":"ContainerStarted","Data":"f171591a154cceb8c8838cbee79c787902fe8364213ce89a0334aa8dac761bdb"} Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.543296 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-sl5w2"] Feb 16 17:16:04 crc kubenswrapper[4870]: W0216 17:16:04.546771 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod601db539_ff99_4286_b333_89a76d744d27.slice/crio-53af89a109a4839e23c764db985278197189c433d753467d2d0d65f16635c126 WatchSource:0}: Error finding container 53af89a109a4839e23c764db985278197189c433d753467d2d0d65f16635c126: Status 404 returned error can't find the container with id 53af89a109a4839e23c764db985278197189c433d753467d2d0d65f16635c126 Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.772106 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-lnfwq" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.772159 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lnfwq" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.819458 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lnfwq" Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.882736 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-2qwb2"] Feb 16 17:16:04 crc kubenswrapper[4870]: W0216 17:16:04.887700 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b8e45f6_6719_4e08_832f_fb4074dc21b7.slice/crio-14b7a2c431bacac67537cd06759c39e389c25bf0ea462f692d82c215efe7f52f WatchSource:0}: Error finding container 14b7a2c431bacac67537cd06759c39e389c25bf0ea462f692d82c215efe7f52f: Status 404 returned error can't find the container with id 14b7a2c431bacac67537cd06759c39e389c25bf0ea462f692d82c215efe7f52f Feb 16 17:16:04 crc kubenswrapper[4870]: I0216 17:16:04.891284 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d1cdf458-f970-4341-b2e9-f0752bf88a9c-memberlist\") pod \"speaker-fh77x\" (UID: \"d1cdf458-f970-4341-b2e9-f0752bf88a9c\") " pod="metallb-system/speaker-fh77x" Feb 16 17:16:04 crc kubenswrapper[4870]: E0216 17:16:04.891424 4870 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 17:16:04 crc kubenswrapper[4870]: E0216 17:16:04.891479 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1cdf458-f970-4341-b2e9-f0752bf88a9c-memberlist podName:d1cdf458-f970-4341-b2e9-f0752bf88a9c nodeName:}" failed. 
No retries permitted until 2026-02-16 17:16:05.891463442 +0000 UTC m=+970.374927826 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d1cdf458-f970-4341-b2e9-f0752bf88a9c-memberlist") pod "speaker-fh77x" (UID: "d1cdf458-f970-4341-b2e9-f0752bf88a9c") : secret "metallb-memberlist" not found Feb 16 17:16:05 crc kubenswrapper[4870]: I0216 17:16:05.473057 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-2qwb2" event={"ID":"6b8e45f6-6719-4e08-832f-fb4074dc21b7","Type":"ContainerStarted","Data":"44c5cc3e7a95b50d5cba7b9e3a65cd61e09deaa6936caeb33f219e33a702f435"} Feb 16 17:16:05 crc kubenswrapper[4870]: I0216 17:16:05.473382 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-2qwb2" Feb 16 17:16:05 crc kubenswrapper[4870]: I0216 17:16:05.473393 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-2qwb2" event={"ID":"6b8e45f6-6719-4e08-832f-fb4074dc21b7","Type":"ContainerStarted","Data":"a0ca671e934f069dfdbe65248613b02e22fbdc6c32f7bdb24efd9c17febe3135"} Feb 16 17:16:05 crc kubenswrapper[4870]: I0216 17:16:05.473402 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-2qwb2" event={"ID":"6b8e45f6-6719-4e08-832f-fb4074dc21b7","Type":"ContainerStarted","Data":"14b7a2c431bacac67537cd06759c39e389c25bf0ea462f692d82c215efe7f52f"} Feb 16 17:16:05 crc kubenswrapper[4870]: I0216 17:16:05.474536 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sl5w2" event={"ID":"601db539-ff99-4286-b333-89a76d744d27","Type":"ContainerStarted","Data":"53af89a109a4839e23c764db985278197189c433d753467d2d0d65f16635c126"} Feb 16 17:16:05 crc kubenswrapper[4870]: I0216 17:16:05.496409 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/controller-69bbfbf88f-2qwb2" podStartSLOduration=1.496382772 podStartE2EDuration="1.496382772s" podCreationTimestamp="2026-02-16 17:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:16:05.488227729 +0000 UTC m=+969.971692133" watchObservedRunningTime="2026-02-16 17:16:05.496382772 +0000 UTC m=+969.979847156" Feb 16 17:16:05 crc kubenswrapper[4870]: I0216 17:16:05.521644 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lnfwq" Feb 16 17:16:05 crc kubenswrapper[4870]: I0216 17:16:05.557850 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lnfwq"] Feb 16 17:16:05 crc kubenswrapper[4870]: I0216 17:16:05.903639 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d1cdf458-f970-4341-b2e9-f0752bf88a9c-memberlist\") pod \"speaker-fh77x\" (UID: \"d1cdf458-f970-4341-b2e9-f0752bf88a9c\") " pod="metallb-system/speaker-fh77x" Feb 16 17:16:05 crc kubenswrapper[4870]: I0216 17:16:05.921104 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d1cdf458-f970-4341-b2e9-f0752bf88a9c-memberlist\") pod \"speaker-fh77x\" (UID: \"d1cdf458-f970-4341-b2e9-f0752bf88a9c\") " pod="metallb-system/speaker-fh77x" Feb 16 17:16:05 crc kubenswrapper[4870]: I0216 17:16:05.945766 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-fh77x" Feb 16 17:16:05 crc kubenswrapper[4870]: W0216 17:16:05.974897 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1cdf458_f970_4341_b2e9_f0752bf88a9c.slice/crio-f515920464a36d7d99acc103faa917c558795debaef3dd0192669a8a9c72500f WatchSource:0}: Error finding container f515920464a36d7d99acc103faa917c558795debaef3dd0192669a8a9c72500f: Status 404 returned error can't find the container with id f515920464a36d7d99acc103faa917c558795debaef3dd0192669a8a9c72500f Feb 16 17:16:06 crc kubenswrapper[4870]: I0216 17:16:06.498148 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fh77x" event={"ID":"d1cdf458-f970-4341-b2e9-f0752bf88a9c","Type":"ContainerStarted","Data":"b018330de4be5663e6ab0863c24b98afed8c23f0df41d437afd39dde32937afa"} Feb 16 17:16:06 crc kubenswrapper[4870]: I0216 17:16:06.498186 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fh77x" event={"ID":"d1cdf458-f970-4341-b2e9-f0752bf88a9c","Type":"ContainerStarted","Data":"ba4bf31d301434aec3cfecc876e1ca856603393742810d4ce5d613494b400ae1"} Feb 16 17:16:06 crc kubenswrapper[4870]: I0216 17:16:06.498195 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fh77x" event={"ID":"d1cdf458-f970-4341-b2e9-f0752bf88a9c","Type":"ContainerStarted","Data":"f515920464a36d7d99acc103faa917c558795debaef3dd0192669a8a9c72500f"} Feb 16 17:16:06 crc kubenswrapper[4870]: I0216 17:16:06.499019 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-fh77x" Feb 16 17:16:06 crc kubenswrapper[4870]: I0216 17:16:06.523105 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-fh77x" podStartSLOduration=2.52308877 podStartE2EDuration="2.52308877s" podCreationTimestamp="2026-02-16 17:16:04 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:16:06.518999493 +0000 UTC m=+971.002463877" watchObservedRunningTime="2026-02-16 17:16:06.52308877 +0000 UTC m=+971.006553154" Feb 16 17:16:07 crc kubenswrapper[4870]: I0216 17:16:07.504118 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lnfwq" podUID="e2a9ae14-e8f8-4907-ae7a-ec7280968b16" containerName="registry-server" containerID="cri-o://ee12dff3cd1aaab9410078112477f082cef87e57b2681883aa841b9ac9b71e52" gracePeriod=2 Feb 16 17:16:07 crc kubenswrapper[4870]: I0216 17:16:07.943661 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lnfwq" Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.056533 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-utilities\") pod \"e2a9ae14-e8f8-4907-ae7a-ec7280968b16\" (UID: \"e2a9ae14-e8f8-4907-ae7a-ec7280968b16\") " Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.056851 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szvr2\" (UniqueName: \"kubernetes.io/projected/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-kube-api-access-szvr2\") pod \"e2a9ae14-e8f8-4907-ae7a-ec7280968b16\" (UID: \"e2a9ae14-e8f8-4907-ae7a-ec7280968b16\") " Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.056893 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-catalog-content\") pod \"e2a9ae14-e8f8-4907-ae7a-ec7280968b16\" (UID: \"e2a9ae14-e8f8-4907-ae7a-ec7280968b16\") " Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.057279 4870 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-utilities" (OuterVolumeSpecName: "utilities") pod "e2a9ae14-e8f8-4907-ae7a-ec7280968b16" (UID: "e2a9ae14-e8f8-4907-ae7a-ec7280968b16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.063162 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-kube-api-access-szvr2" (OuterVolumeSpecName: "kube-api-access-szvr2") pod "e2a9ae14-e8f8-4907-ae7a-ec7280968b16" (UID: "e2a9ae14-e8f8-4907-ae7a-ec7280968b16"). InnerVolumeSpecName "kube-api-access-szvr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.102108 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2a9ae14-e8f8-4907-ae7a-ec7280968b16" (UID: "e2a9ae14-e8f8-4907-ae7a-ec7280968b16"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.160383 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.160423 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szvr2\" (UniqueName: \"kubernetes.io/projected/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-kube-api-access-szvr2\") on node \"crc\" DevicePath \"\"" Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.160439 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2a9ae14-e8f8-4907-ae7a-ec7280968b16-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.510764 4870 generic.go:334] "Generic (PLEG): container finished" podID="e2a9ae14-e8f8-4907-ae7a-ec7280968b16" containerID="ee12dff3cd1aaab9410078112477f082cef87e57b2681883aa841b9ac9b71e52" exitCode=0 Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.510801 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lnfwq" event={"ID":"e2a9ae14-e8f8-4907-ae7a-ec7280968b16","Type":"ContainerDied","Data":"ee12dff3cd1aaab9410078112477f082cef87e57b2681883aa841b9ac9b71e52"} Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.510824 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lnfwq" event={"ID":"e2a9ae14-e8f8-4907-ae7a-ec7280968b16","Type":"ContainerDied","Data":"e831de3225c4316e63074f1d67386c181e0742cbeee45f4433fb0c114dcea615"} Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.510839 4870 scope.go:117] "RemoveContainer" containerID="ee12dff3cd1aaab9410078112477f082cef87e57b2681883aa841b9ac9b71e52" Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 
17:16:08.510936 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lnfwq" Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.545072 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lnfwq"] Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.549286 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lnfwq"] Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.566043 4870 scope.go:117] "RemoveContainer" containerID="2edec9f0a411590de2f739b15dac9cdeed1428868d2f63d695dc7a7772d76acb" Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.608278 4870 scope.go:117] "RemoveContainer" containerID="ceb797467672dc912fde63fc1a86f2d1455e156a2ebf5db47d717ce2f0fdc82d" Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.639177 4870 scope.go:117] "RemoveContainer" containerID="ee12dff3cd1aaab9410078112477f082cef87e57b2681883aa841b9ac9b71e52" Feb 16 17:16:08 crc kubenswrapper[4870]: E0216 17:16:08.643348 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee12dff3cd1aaab9410078112477f082cef87e57b2681883aa841b9ac9b71e52\": container with ID starting with ee12dff3cd1aaab9410078112477f082cef87e57b2681883aa841b9ac9b71e52 not found: ID does not exist" containerID="ee12dff3cd1aaab9410078112477f082cef87e57b2681883aa841b9ac9b71e52" Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.643388 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee12dff3cd1aaab9410078112477f082cef87e57b2681883aa841b9ac9b71e52"} err="failed to get container status \"ee12dff3cd1aaab9410078112477f082cef87e57b2681883aa841b9ac9b71e52\": rpc error: code = NotFound desc = could not find container \"ee12dff3cd1aaab9410078112477f082cef87e57b2681883aa841b9ac9b71e52\": container with ID starting with 
ee12dff3cd1aaab9410078112477f082cef87e57b2681883aa841b9ac9b71e52 not found: ID does not exist" Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.643413 4870 scope.go:117] "RemoveContainer" containerID="2edec9f0a411590de2f739b15dac9cdeed1428868d2f63d695dc7a7772d76acb" Feb 16 17:16:08 crc kubenswrapper[4870]: E0216 17:16:08.643806 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2edec9f0a411590de2f739b15dac9cdeed1428868d2f63d695dc7a7772d76acb\": container with ID starting with 2edec9f0a411590de2f739b15dac9cdeed1428868d2f63d695dc7a7772d76acb not found: ID does not exist" containerID="2edec9f0a411590de2f739b15dac9cdeed1428868d2f63d695dc7a7772d76acb" Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.643856 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2edec9f0a411590de2f739b15dac9cdeed1428868d2f63d695dc7a7772d76acb"} err="failed to get container status \"2edec9f0a411590de2f739b15dac9cdeed1428868d2f63d695dc7a7772d76acb\": rpc error: code = NotFound desc = could not find container \"2edec9f0a411590de2f739b15dac9cdeed1428868d2f63d695dc7a7772d76acb\": container with ID starting with 2edec9f0a411590de2f739b15dac9cdeed1428868d2f63d695dc7a7772d76acb not found: ID does not exist" Feb 16 17:16:08 crc kubenswrapper[4870]: I0216 17:16:08.643891 4870 scope.go:117] "RemoveContainer" containerID="ceb797467672dc912fde63fc1a86f2d1455e156a2ebf5db47d717ce2f0fdc82d" Feb 16 17:16:08 crc kubenswrapper[4870]: E0216 17:16:08.644231 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceb797467672dc912fde63fc1a86f2d1455e156a2ebf5db47d717ce2f0fdc82d\": container with ID starting with ceb797467672dc912fde63fc1a86f2d1455e156a2ebf5db47d717ce2f0fdc82d not found: ID does not exist" containerID="ceb797467672dc912fde63fc1a86f2d1455e156a2ebf5db47d717ce2f0fdc82d" Feb 16 17:16:08 crc 
kubenswrapper[4870]: I0216 17:16:08.644258 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceb797467672dc912fde63fc1a86f2d1455e156a2ebf5db47d717ce2f0fdc82d"} err="failed to get container status \"ceb797467672dc912fde63fc1a86f2d1455e156a2ebf5db47d717ce2f0fdc82d\": rpc error: code = NotFound desc = could not find container \"ceb797467672dc912fde63fc1a86f2d1455e156a2ebf5db47d717ce2f0fdc82d\": container with ID starting with ceb797467672dc912fde63fc1a86f2d1455e156a2ebf5db47d717ce2f0fdc82d not found: ID does not exist" Feb 16 17:16:10 crc kubenswrapper[4870]: I0216 17:16:10.232935 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a9ae14-e8f8-4907-ae7a-ec7280968b16" path="/var/lib/kubelet/pods/e2a9ae14-e8f8-4907-ae7a-ec7280968b16/volumes" Feb 16 17:16:12 crc kubenswrapper[4870]: I0216 17:16:12.538591 4870 generic.go:334] "Generic (PLEG): container finished" podID="d51255e7-059d-4c69-9a01-f90249fe53bf" containerID="597d67f87c61d431dacda6a2161b5ba805177b7951499bbdf9f81ac9b8127799" exitCode=0 Feb 16 17:16:12 crc kubenswrapper[4870]: I0216 17:16:12.538664 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qg9jr" event={"ID":"d51255e7-059d-4c69-9a01-f90249fe53bf","Type":"ContainerDied","Data":"597d67f87c61d431dacda6a2161b5ba805177b7951499bbdf9f81ac9b8127799"} Feb 16 17:16:12 crc kubenswrapper[4870]: I0216 17:16:12.541141 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sl5w2" event={"ID":"601db539-ff99-4286-b333-89a76d744d27","Type":"ContainerStarted","Data":"1a4c08dad40a77d194bbe4a6de02eb95085013e3281fdabf3c75cf9584588448"} Feb 16 17:16:12 crc kubenswrapper[4870]: I0216 17:16:12.541328 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sl5w2" Feb 16 17:16:12 crc kubenswrapper[4870]: I0216 17:16:12.590489 4870 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sl5w2" podStartSLOduration=2.348598646 podStartE2EDuration="9.590466344s" podCreationTimestamp="2026-02-16 17:16:03 +0000 UTC" firstStartedPulling="2026-02-16 17:16:04.549040151 +0000 UTC m=+969.032504535" lastFinishedPulling="2026-02-16 17:16:11.790907839 +0000 UTC m=+976.274372233" observedRunningTime="2026-02-16 17:16:12.586339448 +0000 UTC m=+977.069803852" watchObservedRunningTime="2026-02-16 17:16:12.590466344 +0000 UTC m=+977.073930738" Feb 16 17:16:13 crc kubenswrapper[4870]: I0216 17:16:13.549263 4870 generic.go:334] "Generic (PLEG): container finished" podID="d51255e7-059d-4c69-9a01-f90249fe53bf" containerID="47cefa93664e02379f436cfc07cf2b1880653706ca9d54ffea0657f8087bcb09" exitCode=0 Feb 16 17:16:13 crc kubenswrapper[4870]: I0216 17:16:13.551374 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qg9jr" event={"ID":"d51255e7-059d-4c69-9a01-f90249fe53bf","Type":"ContainerDied","Data":"47cefa93664e02379f436cfc07cf2b1880653706ca9d54ffea0657f8087bcb09"} Feb 16 17:16:14 crc kubenswrapper[4870]: I0216 17:16:14.435819 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-2qwb2" Feb 16 17:16:14 crc kubenswrapper[4870]: I0216 17:16:14.559121 4870 generic.go:334] "Generic (PLEG): container finished" podID="d51255e7-059d-4c69-9a01-f90249fe53bf" containerID="9d738c858aafc18da964f9175ef1fa07d515f95633b63a8281b23f4ddfc7ea7d" exitCode=0 Feb 16 17:16:14 crc kubenswrapper[4870]: I0216 17:16:14.559339 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qg9jr" event={"ID":"d51255e7-059d-4c69-9a01-f90249fe53bf","Type":"ContainerDied","Data":"9d738c858aafc18da964f9175ef1fa07d515f95633b63a8281b23f4ddfc7ea7d"} Feb 16 17:16:15 crc kubenswrapper[4870]: I0216 17:16:15.568873 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-qg9jr" event={"ID":"d51255e7-059d-4c69-9a01-f90249fe53bf","Type":"ContainerStarted","Data":"15a94d4241a0725b3eea9ce5ffd87e62cc6a6a39ebe536a185fcc7f7d91f1646"} Feb 16 17:16:15 crc kubenswrapper[4870]: I0216 17:16:15.568923 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qg9jr" event={"ID":"d51255e7-059d-4c69-9a01-f90249fe53bf","Type":"ContainerStarted","Data":"2e36b76a3189d217fb8fe8f30a4f4ded7f162066dce83221b52efa32810ae753"} Feb 16 17:16:15 crc kubenswrapper[4870]: I0216 17:16:15.568934 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qg9jr" event={"ID":"d51255e7-059d-4c69-9a01-f90249fe53bf","Type":"ContainerStarted","Data":"7415e169bb988d86ce32b9a12f0c162d10b0efc93487c407357329c2a95be949"} Feb 16 17:16:15 crc kubenswrapper[4870]: I0216 17:16:15.568962 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qg9jr" event={"ID":"d51255e7-059d-4c69-9a01-f90249fe53bf","Type":"ContainerStarted","Data":"fe487eecff19f17a35600dec08e827244655d7587684e06dd6f722096b90eaa7"} Feb 16 17:16:15 crc kubenswrapper[4870]: I0216 17:16:15.568972 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qg9jr" event={"ID":"d51255e7-059d-4c69-9a01-f90249fe53bf","Type":"ContainerStarted","Data":"3fc8a9daa501f338a9b2af3fd9d5d2920d722e89356e9dc2c38a96fcd772b2dc"} Feb 16 17:16:16 crc kubenswrapper[4870]: I0216 17:16:16.580961 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qg9jr" event={"ID":"d51255e7-059d-4c69-9a01-f90249fe53bf","Type":"ContainerStarted","Data":"3ec22d9133d4b42e73583b90e5ed7625ebaafedc7c680fb8e1f80b751b52d843"} Feb 16 17:16:16 crc kubenswrapper[4870]: I0216 17:16:16.581285 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:19 crc kubenswrapper[4870]: I0216 17:16:19.278131 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:19 crc kubenswrapper[4870]: I0216 17:16:19.327413 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:19 crc kubenswrapper[4870]: I0216 17:16:19.352074 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-qg9jr" podStartSLOduration=8.999989924 podStartE2EDuration="16.352050727s" podCreationTimestamp="2026-02-16 17:16:03 +0000 UTC" firstStartedPulling="2026-02-16 17:16:04.422933368 +0000 UTC m=+968.906397752" lastFinishedPulling="2026-02-16 17:16:11.774994171 +0000 UTC m=+976.258458555" observedRunningTime="2026-02-16 17:16:16.60819194 +0000 UTC m=+981.091656334" watchObservedRunningTime="2026-02-16 17:16:19.352050727 +0000 UTC m=+983.835515121" Feb 16 17:16:24 crc kubenswrapper[4870]: I0216 17:16:24.281200 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-qg9jr" Feb 16 17:16:24 crc kubenswrapper[4870]: I0216 17:16:24.302163 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sl5w2" Feb 16 17:16:25 crc kubenswrapper[4870]: I0216 17:16:25.949651 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-fh77x" Feb 16 17:16:29 crc kubenswrapper[4870]: I0216 17:16:29.141013 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-r4f8d"] Feb 16 17:16:29 crc kubenswrapper[4870]: E0216 17:16:29.141626 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a9ae14-e8f8-4907-ae7a-ec7280968b16" containerName="extract-utilities" Feb 16 17:16:29 crc kubenswrapper[4870]: I0216 17:16:29.141642 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a9ae14-e8f8-4907-ae7a-ec7280968b16" containerName="extract-utilities" Feb 16 17:16:29 crc kubenswrapper[4870]: 
E0216 17:16:29.141653 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a9ae14-e8f8-4907-ae7a-ec7280968b16" containerName="registry-server" Feb 16 17:16:29 crc kubenswrapper[4870]: I0216 17:16:29.141660 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a9ae14-e8f8-4907-ae7a-ec7280968b16" containerName="registry-server" Feb 16 17:16:29 crc kubenswrapper[4870]: E0216 17:16:29.141682 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a9ae14-e8f8-4907-ae7a-ec7280968b16" containerName="extract-content" Feb 16 17:16:29 crc kubenswrapper[4870]: I0216 17:16:29.141689 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a9ae14-e8f8-4907-ae7a-ec7280968b16" containerName="extract-content" Feb 16 17:16:29 crc kubenswrapper[4870]: I0216 17:16:29.141805 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a9ae14-e8f8-4907-ae7a-ec7280968b16" containerName="registry-server" Feb 16 17:16:29 crc kubenswrapper[4870]: I0216 17:16:29.142338 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-r4f8d" Feb 16 17:16:29 crc kubenswrapper[4870]: I0216 17:16:29.147733 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 16 17:16:29 crc kubenswrapper[4870]: I0216 17:16:29.147899 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 16 17:16:29 crc kubenswrapper[4870]: I0216 17:16:29.147913 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-m9qmc" Feb 16 17:16:29 crc kubenswrapper[4870]: I0216 17:16:29.153452 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-r4f8d"] Feb 16 17:16:29 crc kubenswrapper[4870]: I0216 17:16:29.281924 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6g56\" (UniqueName: \"kubernetes.io/projected/6bcf1396-98fc-49c5-9c14-ee1a9ded83f6-kube-api-access-c6g56\") pod \"openstack-operator-index-r4f8d\" (UID: \"6bcf1396-98fc-49c5-9c14-ee1a9ded83f6\") " pod="openstack-operators/openstack-operator-index-r4f8d" Feb 16 17:16:29 crc kubenswrapper[4870]: I0216 17:16:29.382832 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6g56\" (UniqueName: \"kubernetes.io/projected/6bcf1396-98fc-49c5-9c14-ee1a9ded83f6-kube-api-access-c6g56\") pod \"openstack-operator-index-r4f8d\" (UID: \"6bcf1396-98fc-49c5-9c14-ee1a9ded83f6\") " pod="openstack-operators/openstack-operator-index-r4f8d" Feb 16 17:16:29 crc kubenswrapper[4870]: I0216 17:16:29.403115 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6g56\" (UniqueName: \"kubernetes.io/projected/6bcf1396-98fc-49c5-9c14-ee1a9ded83f6-kube-api-access-c6g56\") pod \"openstack-operator-index-r4f8d\" (UID: 
\"6bcf1396-98fc-49c5-9c14-ee1a9ded83f6\") " pod="openstack-operators/openstack-operator-index-r4f8d" Feb 16 17:16:29 crc kubenswrapper[4870]: I0216 17:16:29.470048 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-r4f8d" Feb 16 17:16:29 crc kubenswrapper[4870]: I0216 17:16:29.888909 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-r4f8d"] Feb 16 17:16:29 crc kubenswrapper[4870]: W0216 17:16:29.892661 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bcf1396_98fc_49c5_9c14_ee1a9ded83f6.slice/crio-0a385b8cdd812934a8f1a500085733015974feac1191605c5e8d85369593fb28 WatchSource:0}: Error finding container 0a385b8cdd812934a8f1a500085733015974feac1191605c5e8d85369593fb28: Status 404 returned error can't find the container with id 0a385b8cdd812934a8f1a500085733015974feac1191605c5e8d85369593fb28 Feb 16 17:16:30 crc kubenswrapper[4870]: I0216 17:16:30.677084 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4f8d" event={"ID":"6bcf1396-98fc-49c5-9c14-ee1a9ded83f6","Type":"ContainerStarted","Data":"0a385b8cdd812934a8f1a500085733015974feac1191605c5e8d85369593fb28"} Feb 16 17:16:31 crc kubenswrapper[4870]: I0216 17:16:31.932095 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-r4f8d"] Feb 16 17:16:32 crc kubenswrapper[4870]: I0216 17:16:32.529361 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8v8hn"] Feb 16 17:16:32 crc kubenswrapper[4870]: I0216 17:16:32.530745 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8v8hn" Feb 16 17:16:32 crc kubenswrapper[4870]: I0216 17:16:32.542676 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8v8hn"] Feb 16 17:16:32 crc kubenswrapper[4870]: I0216 17:16:32.700322 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4f8d" event={"ID":"6bcf1396-98fc-49c5-9c14-ee1a9ded83f6","Type":"ContainerStarted","Data":"0a8eaac76d05bf2caa218ad101deb743a98810773059beb9db0ca9ad1571dcaa"} Feb 16 17:16:32 crc kubenswrapper[4870]: I0216 17:16:32.700577 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-r4f8d" podUID="6bcf1396-98fc-49c5-9c14-ee1a9ded83f6" containerName="registry-server" containerID="cri-o://0a8eaac76d05bf2caa218ad101deb743a98810773059beb9db0ca9ad1571dcaa" gracePeriod=2 Feb 16 17:16:32 crc kubenswrapper[4870]: I0216 17:16:32.716359 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-r4f8d" podStartSLOduration=1.2007905700000001 podStartE2EDuration="3.716337463s" podCreationTimestamp="2026-02-16 17:16:29 +0000 UTC" firstStartedPulling="2026-02-16 17:16:29.8951046 +0000 UTC m=+994.378568994" lastFinishedPulling="2026-02-16 17:16:32.410651503 +0000 UTC m=+996.894115887" observedRunningTime="2026-02-16 17:16:32.715108219 +0000 UTC m=+997.198572603" watchObservedRunningTime="2026-02-16 17:16:32.716337463 +0000 UTC m=+997.199801847" Feb 16 17:16:32 crc kubenswrapper[4870]: I0216 17:16:32.731023 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7xnq\" (UniqueName: \"kubernetes.io/projected/8bc858a0-aec4-4366-85ea-d046f8d8464e-kube-api-access-z7xnq\") pod \"openstack-operator-index-8v8hn\" (UID: \"8bc858a0-aec4-4366-85ea-d046f8d8464e\") " 
pod="openstack-operators/openstack-operator-index-8v8hn" Feb 16 17:16:32 crc kubenswrapper[4870]: I0216 17:16:32.832994 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7xnq\" (UniqueName: \"kubernetes.io/projected/8bc858a0-aec4-4366-85ea-d046f8d8464e-kube-api-access-z7xnq\") pod \"openstack-operator-index-8v8hn\" (UID: \"8bc858a0-aec4-4366-85ea-d046f8d8464e\") " pod="openstack-operators/openstack-operator-index-8v8hn" Feb 16 17:16:32 crc kubenswrapper[4870]: I0216 17:16:32.859646 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7xnq\" (UniqueName: \"kubernetes.io/projected/8bc858a0-aec4-4366-85ea-d046f8d8464e-kube-api-access-z7xnq\") pod \"openstack-operator-index-8v8hn\" (UID: \"8bc858a0-aec4-4366-85ea-d046f8d8464e\") " pod="openstack-operators/openstack-operator-index-8v8hn" Feb 16 17:16:33 crc kubenswrapper[4870]: I0216 17:16:33.151557 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8v8hn" Feb 16 17:16:33 crc kubenswrapper[4870]: I0216 17:16:33.188325 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-r4f8d" Feb 16 17:16:33 crc kubenswrapper[4870]: I0216 17:16:33.255071 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6g56\" (UniqueName: \"kubernetes.io/projected/6bcf1396-98fc-49c5-9c14-ee1a9ded83f6-kube-api-access-c6g56\") pod \"6bcf1396-98fc-49c5-9c14-ee1a9ded83f6\" (UID: \"6bcf1396-98fc-49c5-9c14-ee1a9ded83f6\") " Feb 16 17:16:33 crc kubenswrapper[4870]: I0216 17:16:33.262485 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bcf1396-98fc-49c5-9c14-ee1a9ded83f6-kube-api-access-c6g56" (OuterVolumeSpecName: "kube-api-access-c6g56") pod "6bcf1396-98fc-49c5-9c14-ee1a9ded83f6" (UID: "6bcf1396-98fc-49c5-9c14-ee1a9ded83f6"). InnerVolumeSpecName "kube-api-access-c6g56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:16:33 crc kubenswrapper[4870]: I0216 17:16:33.356735 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6g56\" (UniqueName: \"kubernetes.io/projected/6bcf1396-98fc-49c5-9c14-ee1a9ded83f6-kube-api-access-c6g56\") on node \"crc\" DevicePath \"\"" Feb 16 17:16:33 crc kubenswrapper[4870]: I0216 17:16:33.586297 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8v8hn"] Feb 16 17:16:33 crc kubenswrapper[4870]: W0216 17:16:33.596155 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bc858a0_aec4_4366_85ea_d046f8d8464e.slice/crio-dba11d2b59163a8457ffb357b8b85ce5f154c55eb36dfc3bc904b19da04db2f3 WatchSource:0}: Error finding container dba11d2b59163a8457ffb357b8b85ce5f154c55eb36dfc3bc904b19da04db2f3: Status 404 returned error can't find the container with id dba11d2b59163a8457ffb357b8b85ce5f154c55eb36dfc3bc904b19da04db2f3 Feb 16 17:16:33 crc kubenswrapper[4870]: I0216 17:16:33.708571 4870 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8v8hn" event={"ID":"8bc858a0-aec4-4366-85ea-d046f8d8464e","Type":"ContainerStarted","Data":"dba11d2b59163a8457ffb357b8b85ce5f154c55eb36dfc3bc904b19da04db2f3"} Feb 16 17:16:33 crc kubenswrapper[4870]: I0216 17:16:33.711048 4870 generic.go:334] "Generic (PLEG): container finished" podID="6bcf1396-98fc-49c5-9c14-ee1a9ded83f6" containerID="0a8eaac76d05bf2caa218ad101deb743a98810773059beb9db0ca9ad1571dcaa" exitCode=0 Feb 16 17:16:33 crc kubenswrapper[4870]: I0216 17:16:33.711095 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4f8d" event={"ID":"6bcf1396-98fc-49c5-9c14-ee1a9ded83f6","Type":"ContainerDied","Data":"0a8eaac76d05bf2caa218ad101deb743a98810773059beb9db0ca9ad1571dcaa"} Feb 16 17:16:33 crc kubenswrapper[4870]: I0216 17:16:33.711110 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-r4f8d" Feb 16 17:16:33 crc kubenswrapper[4870]: I0216 17:16:33.711134 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4f8d" event={"ID":"6bcf1396-98fc-49c5-9c14-ee1a9ded83f6","Type":"ContainerDied","Data":"0a385b8cdd812934a8f1a500085733015974feac1191605c5e8d85369593fb28"} Feb 16 17:16:33 crc kubenswrapper[4870]: I0216 17:16:33.711153 4870 scope.go:117] "RemoveContainer" containerID="0a8eaac76d05bf2caa218ad101deb743a98810773059beb9db0ca9ad1571dcaa" Feb 16 17:16:33 crc kubenswrapper[4870]: I0216 17:16:33.751693 4870 scope.go:117] "RemoveContainer" containerID="0a8eaac76d05bf2caa218ad101deb743a98810773059beb9db0ca9ad1571dcaa" Feb 16 17:16:33 crc kubenswrapper[4870]: E0216 17:16:33.752393 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a8eaac76d05bf2caa218ad101deb743a98810773059beb9db0ca9ad1571dcaa\": container with ID starting with 
0a8eaac76d05bf2caa218ad101deb743a98810773059beb9db0ca9ad1571dcaa not found: ID does not exist" containerID="0a8eaac76d05bf2caa218ad101deb743a98810773059beb9db0ca9ad1571dcaa" Feb 16 17:16:33 crc kubenswrapper[4870]: I0216 17:16:33.752426 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a8eaac76d05bf2caa218ad101deb743a98810773059beb9db0ca9ad1571dcaa"} err="failed to get container status \"0a8eaac76d05bf2caa218ad101deb743a98810773059beb9db0ca9ad1571dcaa\": rpc error: code = NotFound desc = could not find container \"0a8eaac76d05bf2caa218ad101deb743a98810773059beb9db0ca9ad1571dcaa\": container with ID starting with 0a8eaac76d05bf2caa218ad101deb743a98810773059beb9db0ca9ad1571dcaa not found: ID does not exist" Feb 16 17:16:33 crc kubenswrapper[4870]: I0216 17:16:33.799050 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-r4f8d"] Feb 16 17:16:33 crc kubenswrapper[4870]: I0216 17:16:33.803661 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-r4f8d"] Feb 16 17:16:34 crc kubenswrapper[4870]: I0216 17:16:34.230605 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bcf1396-98fc-49c5-9c14-ee1a9ded83f6" path="/var/lib/kubelet/pods/6bcf1396-98fc-49c5-9c14-ee1a9ded83f6/volumes" Feb 16 17:16:34 crc kubenswrapper[4870]: I0216 17:16:34.724939 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8v8hn" event={"ID":"8bc858a0-aec4-4366-85ea-d046f8d8464e","Type":"ContainerStarted","Data":"4485454115b5195eee8a7fe5af6be8d8d63480b12d42236365ab5fe104bc08ce"} Feb 16 17:16:34 crc kubenswrapper[4870]: I0216 17:16:34.746330 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8v8hn" podStartSLOduration=2.673460725 podStartE2EDuration="2.746308575s" podCreationTimestamp="2026-02-16 17:16:32 
+0000 UTC" firstStartedPulling="2026-02-16 17:16:33.612877087 +0000 UTC m=+998.096341511" lastFinishedPulling="2026-02-16 17:16:33.685724967 +0000 UTC m=+998.169189361" observedRunningTime="2026-02-16 17:16:34.740517902 +0000 UTC m=+999.223982286" watchObservedRunningTime="2026-02-16 17:16:34.746308575 +0000 UTC m=+999.229772969" Feb 16 17:16:43 crc kubenswrapper[4870]: I0216 17:16:43.152196 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-8v8hn" Feb 16 17:16:43 crc kubenswrapper[4870]: I0216 17:16:43.152670 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-8v8hn" Feb 16 17:16:43 crc kubenswrapper[4870]: I0216 17:16:43.191174 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-8v8hn" Feb 16 17:16:43 crc kubenswrapper[4870]: I0216 17:16:43.822419 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-8v8hn" Feb 16 17:16:51 crc kubenswrapper[4870]: I0216 17:16:51.909820 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw"] Feb 16 17:16:51 crc kubenswrapper[4870]: E0216 17:16:51.910766 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bcf1396-98fc-49c5-9c14-ee1a9ded83f6" containerName="registry-server" Feb 16 17:16:51 crc kubenswrapper[4870]: I0216 17:16:51.910785 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bcf1396-98fc-49c5-9c14-ee1a9ded83f6" containerName="registry-server" Feb 16 17:16:51 crc kubenswrapper[4870]: I0216 17:16:51.910982 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bcf1396-98fc-49c5-9c14-ee1a9ded83f6" containerName="registry-server" Feb 16 17:16:51 crc kubenswrapper[4870]: I0216 17:16:51.912020 4870 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" Feb 16 17:16:51 crc kubenswrapper[4870]: I0216 17:16:51.916801 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-56zwn" Feb 16 17:16:51 crc kubenswrapper[4870]: I0216 17:16:51.930314 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw"] Feb 16 17:16:52 crc kubenswrapper[4870]: I0216 17:16:52.035586 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60cefeeb-704a-4af2-9df4-f497a9d77e64-util\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw\" (UID: \"60cefeeb-704a-4af2-9df4-f497a9d77e64\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" Feb 16 17:16:52 crc kubenswrapper[4870]: I0216 17:16:52.035678 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkxfl\" (UniqueName: \"kubernetes.io/projected/60cefeeb-704a-4af2-9df4-f497a9d77e64-kube-api-access-gkxfl\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw\" (UID: \"60cefeeb-704a-4af2-9df4-f497a9d77e64\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" Feb 16 17:16:52 crc kubenswrapper[4870]: I0216 17:16:52.035720 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60cefeeb-704a-4af2-9df4-f497a9d77e64-bundle\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw\" (UID: \"60cefeeb-704a-4af2-9df4-f497a9d77e64\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" Feb 16 17:16:52 crc 
kubenswrapper[4870]: I0216 17:16:52.137583 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60cefeeb-704a-4af2-9df4-f497a9d77e64-util\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw\" (UID: \"60cefeeb-704a-4af2-9df4-f497a9d77e64\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" Feb 16 17:16:52 crc kubenswrapper[4870]: I0216 17:16:52.138059 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkxfl\" (UniqueName: \"kubernetes.io/projected/60cefeeb-704a-4af2-9df4-f497a9d77e64-kube-api-access-gkxfl\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw\" (UID: \"60cefeeb-704a-4af2-9df4-f497a9d77e64\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" Feb 16 17:16:52 crc kubenswrapper[4870]: I0216 17:16:52.138330 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60cefeeb-704a-4af2-9df4-f497a9d77e64-bundle\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw\" (UID: \"60cefeeb-704a-4af2-9df4-f497a9d77e64\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" Feb 16 17:16:52 crc kubenswrapper[4870]: I0216 17:16:52.138391 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60cefeeb-704a-4af2-9df4-f497a9d77e64-util\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw\" (UID: \"60cefeeb-704a-4af2-9df4-f497a9d77e64\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" Feb 16 17:16:52 crc kubenswrapper[4870]: I0216 17:16:52.138915 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/60cefeeb-704a-4af2-9df4-f497a9d77e64-bundle\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw\" (UID: \"60cefeeb-704a-4af2-9df4-f497a9d77e64\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" Feb 16 17:16:52 crc kubenswrapper[4870]: I0216 17:16:52.160840 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkxfl\" (UniqueName: \"kubernetes.io/projected/60cefeeb-704a-4af2-9df4-f497a9d77e64-kube-api-access-gkxfl\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw\" (UID: \"60cefeeb-704a-4af2-9df4-f497a9d77e64\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" Feb 16 17:16:52 crc kubenswrapper[4870]: I0216 17:16:52.231042 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" Feb 16 17:16:52 crc kubenswrapper[4870]: I0216 17:16:52.652754 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw"] Feb 16 17:16:52 crc kubenswrapper[4870]: I0216 17:16:52.862277 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" event={"ID":"60cefeeb-704a-4af2-9df4-f497a9d77e64","Type":"ContainerStarted","Data":"e425da7612186aa1fbf7bc5b9b9847e898f3329719b78c752d2330909a57f209"} Feb 16 17:16:52 crc kubenswrapper[4870]: I0216 17:16:52.862341 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" event={"ID":"60cefeeb-704a-4af2-9df4-f497a9d77e64","Type":"ContainerStarted","Data":"359357338cfdcf69f6d84bd7d6e72e144b4997d8cff47f34e2f421ab6a2fb47c"} Feb 16 17:16:53 crc kubenswrapper[4870]: I0216 17:16:53.873310 4870 
generic.go:334] "Generic (PLEG): container finished" podID="60cefeeb-704a-4af2-9df4-f497a9d77e64" containerID="e425da7612186aa1fbf7bc5b9b9847e898f3329719b78c752d2330909a57f209" exitCode=0 Feb 16 17:16:53 crc kubenswrapper[4870]: I0216 17:16:53.873446 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" event={"ID":"60cefeeb-704a-4af2-9df4-f497a9d77e64","Type":"ContainerDied","Data":"e425da7612186aa1fbf7bc5b9b9847e898f3329719b78c752d2330909a57f209"} Feb 16 17:16:53 crc kubenswrapper[4870]: I0216 17:16:53.876890 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:16:54 crc kubenswrapper[4870]: I0216 17:16:54.885202 4870 generic.go:334] "Generic (PLEG): container finished" podID="60cefeeb-704a-4af2-9df4-f497a9d77e64" containerID="ee039362fba17751d58f1e03b5273f5a601dcbe9300805933a6bdbb7b74b5a9f" exitCode=0 Feb 16 17:16:54 crc kubenswrapper[4870]: I0216 17:16:54.885270 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" event={"ID":"60cefeeb-704a-4af2-9df4-f497a9d77e64","Type":"ContainerDied","Data":"ee039362fba17751d58f1e03b5273f5a601dcbe9300805933a6bdbb7b74b5a9f"} Feb 16 17:16:55 crc kubenswrapper[4870]: I0216 17:16:55.896800 4870 generic.go:334] "Generic (PLEG): container finished" podID="60cefeeb-704a-4af2-9df4-f497a9d77e64" containerID="abd85813d619d4ca9d346fc51529480c84104685b663f1e60b7079195f04a12f" exitCode=0 Feb 16 17:16:55 crc kubenswrapper[4870]: I0216 17:16:55.896844 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" event={"ID":"60cefeeb-704a-4af2-9df4-f497a9d77e64","Type":"ContainerDied","Data":"abd85813d619d4ca9d346fc51529480c84104685b663f1e60b7079195f04a12f"} Feb 16 17:16:57 crc kubenswrapper[4870]: I0216 17:16:57.277108 
4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" Feb 16 17:16:57 crc kubenswrapper[4870]: I0216 17:16:57.419543 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkxfl\" (UniqueName: \"kubernetes.io/projected/60cefeeb-704a-4af2-9df4-f497a9d77e64-kube-api-access-gkxfl\") pod \"60cefeeb-704a-4af2-9df4-f497a9d77e64\" (UID: \"60cefeeb-704a-4af2-9df4-f497a9d77e64\") " Feb 16 17:16:57 crc kubenswrapper[4870]: I0216 17:16:57.419666 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60cefeeb-704a-4af2-9df4-f497a9d77e64-util\") pod \"60cefeeb-704a-4af2-9df4-f497a9d77e64\" (UID: \"60cefeeb-704a-4af2-9df4-f497a9d77e64\") " Feb 16 17:16:57 crc kubenswrapper[4870]: I0216 17:16:57.419717 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60cefeeb-704a-4af2-9df4-f497a9d77e64-bundle\") pod \"60cefeeb-704a-4af2-9df4-f497a9d77e64\" (UID: \"60cefeeb-704a-4af2-9df4-f497a9d77e64\") " Feb 16 17:16:57 crc kubenswrapper[4870]: I0216 17:16:57.420880 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60cefeeb-704a-4af2-9df4-f497a9d77e64-bundle" (OuterVolumeSpecName: "bundle") pod "60cefeeb-704a-4af2-9df4-f497a9d77e64" (UID: "60cefeeb-704a-4af2-9df4-f497a9d77e64"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:16:57 crc kubenswrapper[4870]: I0216 17:16:57.428323 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60cefeeb-704a-4af2-9df4-f497a9d77e64-kube-api-access-gkxfl" (OuterVolumeSpecName: "kube-api-access-gkxfl") pod "60cefeeb-704a-4af2-9df4-f497a9d77e64" (UID: "60cefeeb-704a-4af2-9df4-f497a9d77e64"). 
InnerVolumeSpecName "kube-api-access-gkxfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:16:57 crc kubenswrapper[4870]: I0216 17:16:57.438187 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60cefeeb-704a-4af2-9df4-f497a9d77e64-util" (OuterVolumeSpecName: "util") pod "60cefeeb-704a-4af2-9df4-f497a9d77e64" (UID: "60cefeeb-704a-4af2-9df4-f497a9d77e64"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:16:57 crc kubenswrapper[4870]: I0216 17:16:57.521525 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkxfl\" (UniqueName: \"kubernetes.io/projected/60cefeeb-704a-4af2-9df4-f497a9d77e64-kube-api-access-gkxfl\") on node \"crc\" DevicePath \"\"" Feb 16 17:16:57 crc kubenswrapper[4870]: I0216 17:16:57.521569 4870 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60cefeeb-704a-4af2-9df4-f497a9d77e64-util\") on node \"crc\" DevicePath \"\"" Feb 16 17:16:57 crc kubenswrapper[4870]: I0216 17:16:57.521585 4870 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60cefeeb-704a-4af2-9df4-f497a9d77e64-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:16:57 crc kubenswrapper[4870]: I0216 17:16:57.921898 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" event={"ID":"60cefeeb-704a-4af2-9df4-f497a9d77e64","Type":"ContainerDied","Data":"359357338cfdcf69f6d84bd7d6e72e144b4997d8cff47f34e2f421ab6a2fb47c"} Feb 16 17:16:57 crc kubenswrapper[4870]: I0216 17:16:57.922011 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw" Feb 16 17:16:57 crc kubenswrapper[4870]: I0216 17:16:57.922021 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="359357338cfdcf69f6d84bd7d6e72e144b4997d8cff47f34e2f421ab6a2fb47c" Feb 16 17:17:04 crc kubenswrapper[4870]: I0216 17:17:04.199396 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6f655b9d6d-rjqbs"] Feb 16 17:17:04 crc kubenswrapper[4870]: E0216 17:17:04.199881 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60cefeeb-704a-4af2-9df4-f497a9d77e64" containerName="pull" Feb 16 17:17:04 crc kubenswrapper[4870]: I0216 17:17:04.199907 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="60cefeeb-704a-4af2-9df4-f497a9d77e64" containerName="pull" Feb 16 17:17:04 crc kubenswrapper[4870]: E0216 17:17:04.199920 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60cefeeb-704a-4af2-9df4-f497a9d77e64" containerName="util" Feb 16 17:17:04 crc kubenswrapper[4870]: I0216 17:17:04.199926 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="60cefeeb-704a-4af2-9df4-f497a9d77e64" containerName="util" Feb 16 17:17:04 crc kubenswrapper[4870]: E0216 17:17:04.199946 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60cefeeb-704a-4af2-9df4-f497a9d77e64" containerName="extract" Feb 16 17:17:04 crc kubenswrapper[4870]: I0216 17:17:04.199963 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="60cefeeb-704a-4af2-9df4-f497a9d77e64" containerName="extract" Feb 16 17:17:04 crc kubenswrapper[4870]: I0216 17:17:04.200090 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="60cefeeb-704a-4af2-9df4-f497a9d77e64" containerName="extract" Feb 16 17:17:04 crc kubenswrapper[4870]: I0216 17:17:04.200502 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-rjqbs" Feb 16 17:17:04 crc kubenswrapper[4870]: I0216 17:17:04.202712 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-cgf87" Feb 16 17:17:04 crc kubenswrapper[4870]: I0216 17:17:04.216967 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llwgn\" (UniqueName: \"kubernetes.io/projected/c8592842-a07d-45a3-a74c-f322156994b2-kube-api-access-llwgn\") pod \"openstack-operator-controller-init-6f655b9d6d-rjqbs\" (UID: \"c8592842-a07d-45a3-a74c-f322156994b2\") " pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-rjqbs" Feb 16 17:17:04 crc kubenswrapper[4870]: I0216 17:17:04.236633 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6f655b9d6d-rjqbs"] Feb 16 17:17:04 crc kubenswrapper[4870]: I0216 17:17:04.317962 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llwgn\" (UniqueName: \"kubernetes.io/projected/c8592842-a07d-45a3-a74c-f322156994b2-kube-api-access-llwgn\") pod \"openstack-operator-controller-init-6f655b9d6d-rjqbs\" (UID: \"c8592842-a07d-45a3-a74c-f322156994b2\") " pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-rjqbs" Feb 16 17:17:04 crc kubenswrapper[4870]: I0216 17:17:04.347648 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llwgn\" (UniqueName: \"kubernetes.io/projected/c8592842-a07d-45a3-a74c-f322156994b2-kube-api-access-llwgn\") pod \"openstack-operator-controller-init-6f655b9d6d-rjqbs\" (UID: \"c8592842-a07d-45a3-a74c-f322156994b2\") " pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-rjqbs" Feb 16 17:17:04 crc kubenswrapper[4870]: I0216 17:17:04.523247 4870 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-rjqbs" Feb 16 17:17:04 crc kubenswrapper[4870]: I0216 17:17:04.938266 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6f655b9d6d-rjqbs"] Feb 16 17:17:04 crc kubenswrapper[4870]: W0216 17:17:04.945079 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8592842_a07d_45a3_a74c_f322156994b2.slice/crio-d020ee3098f7e77505df878c115b5fd58e9c8c0ad903ef444d00a6a480923dc1 WatchSource:0}: Error finding container d020ee3098f7e77505df878c115b5fd58e9c8c0ad903ef444d00a6a480923dc1: Status 404 returned error can't find the container with id d020ee3098f7e77505df878c115b5fd58e9c8c0ad903ef444d00a6a480923dc1 Feb 16 17:17:04 crc kubenswrapper[4870]: I0216 17:17:04.980723 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-rjqbs" event={"ID":"c8592842-a07d-45a3-a74c-f322156994b2","Type":"ContainerStarted","Data":"d020ee3098f7e77505df878c115b5fd58e9c8c0ad903ef444d00a6a480923dc1"} Feb 16 17:17:09 crc kubenswrapper[4870]: I0216 17:17:09.010210 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-rjqbs" event={"ID":"c8592842-a07d-45a3-a74c-f322156994b2","Type":"ContainerStarted","Data":"d386648b27f421c3dc3cb72cb6433fe92d0afefc2f1afd61a17210a901c01ac8"} Feb 16 17:17:09 crc kubenswrapper[4870]: I0216 17:17:09.012305 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-rjqbs" Feb 16 17:17:09 crc kubenswrapper[4870]: I0216 17:17:09.042026 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-rjqbs" podStartSLOduration=1.495707671 
podStartE2EDuration="5.042008254s" podCreationTimestamp="2026-02-16 17:17:04 +0000 UTC" firstStartedPulling="2026-02-16 17:17:04.947084076 +0000 UTC m=+1029.430548460" lastFinishedPulling="2026-02-16 17:17:08.493384659 +0000 UTC m=+1032.976849043" observedRunningTime="2026-02-16 17:17:09.037789715 +0000 UTC m=+1033.521254119" watchObservedRunningTime="2026-02-16 17:17:09.042008254 +0000 UTC m=+1033.525472638" Feb 16 17:17:14 crc kubenswrapper[4870]: I0216 17:17:14.526485 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6f655b9d6d-rjqbs" Feb 16 17:17:35 crc kubenswrapper[4870]: I0216 17:17:35.366580 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:17:35 crc kubenswrapper[4870]: I0216 17:17:35.367133 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.143570 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cchkt"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.144875 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cchkt" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.147518 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-dlxqm" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.162818 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cchkt"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.167588 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-68kws"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.168546 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-68kws" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.170143 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-ggszz" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.186362 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-68kws"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.194586 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czdn6\" (UniqueName: \"kubernetes.io/projected/e7d3d5ca-7088-46e0-88eb-bc8f1270b85d-kube-api-access-czdn6\") pod \"barbican-operator-controller-manager-868647ff47-cchkt\" (UID: \"e7d3d5ca-7088-46e0-88eb-bc8f1270b85d\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cchkt" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.194672 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq8xs\" 
(UniqueName: \"kubernetes.io/projected/a104184d-e08b-46ef-8595-6b21f2826f9a-kube-api-access-vq8xs\") pod \"cinder-operator-controller-manager-5d946d989d-68kws\" (UID: \"a104184d-e08b-46ef-8595-6b21f2826f9a\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-68kws" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.196923 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-t9hz5"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.197905 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-t9hz5" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.199494 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-6cknd" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.206019 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-v7gn2"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.206872 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v7gn2" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.209598 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-rtt78" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.227161 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-t9hz5"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.234141 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-gcgvr"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.235249 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gcgvr" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.240484 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-v7gn2"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.242385 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-zbh9m" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.272316 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hqjtw"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.273397 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hqjtw" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.277242 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-hk9g5" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.279364 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-gcgvr"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.299615 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8blz8\" (UniqueName: \"kubernetes.io/projected/bb375b58-f7fa-4006-b087-cb06ea0cfc86-kube-api-access-8blz8\") pod \"horizon-operator-controller-manager-5b9b8895d5-hqjtw\" (UID: \"bb375b58-f7fa-4006-b087-cb06ea0cfc86\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hqjtw" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.299675 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd8xx\" (UniqueName: \"kubernetes.io/projected/6d98461a-872e-4857-a100-f905a5231b83-kube-api-access-qd8xx\") pod \"heat-operator-controller-manager-69f49c598c-gcgvr\" (UID: \"6d98461a-872e-4857-a100-f905a5231b83\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gcgvr" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.299701 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czdn6\" (UniqueName: \"kubernetes.io/projected/e7d3d5ca-7088-46e0-88eb-bc8f1270b85d-kube-api-access-czdn6\") pod \"barbican-operator-controller-manager-868647ff47-cchkt\" (UID: \"e7d3d5ca-7088-46e0-88eb-bc8f1270b85d\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cchkt" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.299766 4870 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqq7p\" (UniqueName: \"kubernetes.io/projected/aef27f45-8abf-44f3-a290-c6d49dbfa1fd-kube-api-access-jqq7p\") pod \"glance-operator-controller-manager-77987464f4-t9hz5\" (UID: \"aef27f45-8abf-44f3-a290-c6d49dbfa1fd\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-t9hz5" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.299786 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqqhf\" (UniqueName: \"kubernetes.io/projected/0d52da13-82bf-439f-ac03-bbf3f539de78-kube-api-access-wqqhf\") pod \"designate-operator-controller-manager-6d8bf5c495-v7gn2\" (UID: \"0d52da13-82bf-439f-ac03-bbf3f539de78\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v7gn2" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.299813 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq8xs\" (UniqueName: \"kubernetes.io/projected/a104184d-e08b-46ef-8595-6b21f2826f9a-kube-api-access-vq8xs\") pod \"cinder-operator-controller-manager-5d946d989d-68kws\" (UID: \"a104184d-e08b-46ef-8595-6b21f2826f9a\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-68kws" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.324133 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.325055 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.337909 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-wsdch" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.338236 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hqjtw"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.338869 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.360119 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.368101 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-ns7zz"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.371787 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq8xs\" (UniqueName: \"kubernetes.io/projected/a104184d-e08b-46ef-8595-6b21f2826f9a-kube-api-access-vq8xs\") pod \"cinder-operator-controller-manager-5d946d989d-68kws\" (UID: \"a104184d-e08b-46ef-8595-6b21f2826f9a\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-68kws" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.372737 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czdn6\" (UniqueName: \"kubernetes.io/projected/e7d3d5ca-7088-46e0-88eb-bc8f1270b85d-kube-api-access-czdn6\") pod \"barbican-operator-controller-manager-868647ff47-cchkt\" (UID: \"e7d3d5ca-7088-46e0-88eb-bc8f1270b85d\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cchkt" Feb 16 
17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.375762 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ns7zz" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.381372 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-6gvl7" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.386465 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-b9czx"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.387511 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-b9czx" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.394296 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-ns7zz"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.394368 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-4klhx" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.398346 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-k468x"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.399170 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-k468x" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.400963 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqq7p\" (UniqueName: \"kubernetes.io/projected/aef27f45-8abf-44f3-a290-c6d49dbfa1fd-kube-api-access-jqq7p\") pod \"glance-operator-controller-manager-77987464f4-t9hz5\" (UID: \"aef27f45-8abf-44f3-a290-c6d49dbfa1fd\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-t9hz5" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.401010 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqqhf\" (UniqueName: \"kubernetes.io/projected/0d52da13-82bf-439f-ac03-bbf3f539de78-kube-api-access-wqqhf\") pod \"designate-operator-controller-manager-6d8bf5c495-v7gn2\" (UID: \"0d52da13-82bf-439f-ac03-bbf3f539de78\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v7gn2" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.401068 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqgt9\" (UniqueName: \"kubernetes.io/projected/dd026e8b-7f8a-4c07-9bee-84e0fe1e535f-kube-api-access-dqgt9\") pod \"keystone-operator-controller-manager-b4d948c87-b9czx\" (UID: \"dd026e8b-7f8a-4c07-9bee-84e0fe1e535f\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-b9czx" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.401101 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8blz8\" (UniqueName: \"kubernetes.io/projected/bb375b58-f7fa-4006-b087-cb06ea0cfc86-kube-api-access-8blz8\") pod \"horizon-operator-controller-manager-5b9b8895d5-hqjtw\" (UID: \"bb375b58-f7fa-4006-b087-cb06ea0cfc86\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hqjtw" Feb 16 17:17:51 crc 
kubenswrapper[4870]: I0216 17:17:51.401142 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrvhm\" (UniqueName: \"kubernetes.io/projected/2c0e615e-3bf7-4627-b800-af60affed5f5-kube-api-access-mrvhm\") pod \"ironic-operator-controller-manager-554564d7fc-ns7zz\" (UID: \"2c0e615e-3bf7-4627-b800-af60affed5f5\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ns7zz" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.401171 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2lx2\" (UniqueName: \"kubernetes.io/projected/5d8420fa-9cbd-47f7-a252-a187de8515cd-kube-api-access-n2lx2\") pod \"infra-operator-controller-manager-79d975b745-4cm6m\" (UID: \"5d8420fa-9cbd-47f7-a252-a187de8515cd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.401215 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd8xx\" (UniqueName: \"kubernetes.io/projected/6d98461a-872e-4857-a100-f905a5231b83-kube-api-access-qd8xx\") pod \"heat-operator-controller-manager-69f49c598c-gcgvr\" (UID: \"6d98461a-872e-4857-a100-f905a5231b83\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gcgvr" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.401256 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert\") pod \"infra-operator-controller-manager-79d975b745-4cm6m\" (UID: \"5d8420fa-9cbd-47f7-a252-a187de8515cd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.401297 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zwxtq\" (UniqueName: \"kubernetes.io/projected/65381e3b-70d2-4dbf-a1e5-279696c5cc09-kube-api-access-zwxtq\") pod \"manila-operator-controller-manager-54f6768c69-k468x\" (UID: \"65381e3b-70d2-4dbf-a1e5-279696c5cc09\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-k468x" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.402361 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-4twlg" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.453107 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-b9czx"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.470444 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cchkt" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.492650 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-68kws" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.502681 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqq7p\" (UniqueName: \"kubernetes.io/projected/aef27f45-8abf-44f3-a290-c6d49dbfa1fd-kube-api-access-jqq7p\") pod \"glance-operator-controller-manager-77987464f4-t9hz5\" (UID: \"aef27f45-8abf-44f3-a290-c6d49dbfa1fd\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-t9hz5" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.521027 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqgt9\" (UniqueName: \"kubernetes.io/projected/dd026e8b-7f8a-4c07-9bee-84e0fe1e535f-kube-api-access-dqgt9\") pod \"keystone-operator-controller-manager-b4d948c87-b9czx\" (UID: \"dd026e8b-7f8a-4c07-9bee-84e0fe1e535f\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-b9czx" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.521200 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrvhm\" (UniqueName: \"kubernetes.io/projected/2c0e615e-3bf7-4627-b800-af60affed5f5-kube-api-access-mrvhm\") pod \"ironic-operator-controller-manager-554564d7fc-ns7zz\" (UID: \"2c0e615e-3bf7-4627-b800-af60affed5f5\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ns7zz" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.521068 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqqhf\" (UniqueName: \"kubernetes.io/projected/0d52da13-82bf-439f-ac03-bbf3f539de78-kube-api-access-wqqhf\") pod \"designate-operator-controller-manager-6d8bf5c495-v7gn2\" (UID: \"0d52da13-82bf-439f-ac03-bbf3f539de78\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v7gn2" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.521236 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2lx2\" (UniqueName: \"kubernetes.io/projected/5d8420fa-9cbd-47f7-a252-a187de8515cd-kube-api-access-n2lx2\") pod \"infra-operator-controller-manager-79d975b745-4cm6m\" (UID: \"5d8420fa-9cbd-47f7-a252-a187de8515cd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.521333 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert\") pod \"infra-operator-controller-manager-79d975b745-4cm6m\" (UID: \"5d8420fa-9cbd-47f7-a252-a187de8515cd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.521377 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwxtq\" (UniqueName: \"kubernetes.io/projected/65381e3b-70d2-4dbf-a1e5-279696c5cc09-kube-api-access-zwxtq\") pod \"manila-operator-controller-manager-54f6768c69-k468x\" (UID: \"65381e3b-70d2-4dbf-a1e5-279696c5cc09\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-k468x" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.521691 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-t9hz5" Feb 16 17:17:51 crc kubenswrapper[4870]: E0216 17:17:51.521725 4870 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 17:17:51 crc kubenswrapper[4870]: E0216 17:17:51.521770 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert podName:5d8420fa-9cbd-47f7-a252-a187de8515cd nodeName:}" failed. 
No retries permitted until 2026-02-16 17:17:52.021752076 +0000 UTC m=+1076.505216460 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert") pod "infra-operator-controller-manager-79d975b745-4cm6m" (UID: "5d8420fa-9cbd-47f7-a252-a187de8515cd") : secret "infra-operator-webhook-server-cert" not found Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.529432 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-58msl"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.531557 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-58msl" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.538610 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v7gn2" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.554428 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-8rxbz" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.575518 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2lx2\" (UniqueName: \"kubernetes.io/projected/5d8420fa-9cbd-47f7-a252-a187de8515cd-kube-api-access-n2lx2\") pod \"infra-operator-controller-manager-79d975b745-4cm6m\" (UID: \"5d8420fa-9cbd-47f7-a252-a187de8515cd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.576091 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrvhm\" (UniqueName: \"kubernetes.io/projected/2c0e615e-3bf7-4627-b800-af60affed5f5-kube-api-access-mrvhm\") pod 
\"ironic-operator-controller-manager-554564d7fc-ns7zz\" (UID: \"2c0e615e-3bf7-4627-b800-af60affed5f5\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ns7zz" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.577716 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwxtq\" (UniqueName: \"kubernetes.io/projected/65381e3b-70d2-4dbf-a1e5-279696c5cc09-kube-api-access-zwxtq\") pod \"manila-operator-controller-manager-54f6768c69-k468x\" (UID: \"65381e3b-70d2-4dbf-a1e5-279696c5cc09\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-k468x" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.580796 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd8xx\" (UniqueName: \"kubernetes.io/projected/6d98461a-872e-4857-a100-f905a5231b83-kube-api-access-qd8xx\") pod \"heat-operator-controller-manager-69f49c598c-gcgvr\" (UID: \"6d98461a-872e-4857-a100-f905a5231b83\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gcgvr" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.580964 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-k468x"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.582890 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8blz8\" (UniqueName: \"kubernetes.io/projected/bb375b58-f7fa-4006-b087-cb06ea0cfc86-kube-api-access-8blz8\") pod \"horizon-operator-controller-manager-5b9b8895d5-hqjtw\" (UID: \"bb375b58-f7fa-4006-b087-cb06ea0cfc86\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hqjtw" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.611304 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hqjtw" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.617728 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqgt9\" (UniqueName: \"kubernetes.io/projected/dd026e8b-7f8a-4c07-9bee-84e0fe1e535f-kube-api-access-dqgt9\") pod \"keystone-operator-controller-manager-b4d948c87-b9czx\" (UID: \"dd026e8b-7f8a-4c07-9bee-84e0fe1e535f\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-b9czx" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.657787 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-58msl"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.658577 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8njz\" (UniqueName: \"kubernetes.io/projected/7a88f711-61b7-44b5-a82e-5c909efc50e9-kube-api-access-c8njz\") pod \"mariadb-operator-controller-manager-6994f66f48-58msl\" (UID: \"7a88f711-61b7-44b5-a82e-5c909efc50e9\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-58msl" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.672746 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-k468x" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.751067 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-nx2h7"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.752171 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-nx2h7" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.753476 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ns7zz" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.758654 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dbcp6" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.759910 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8njz\" (UniqueName: \"kubernetes.io/projected/7a88f711-61b7-44b5-a82e-5c909efc50e9-kube-api-access-c8njz\") pod \"mariadb-operator-controller-manager-6994f66f48-58msl\" (UID: \"7a88f711-61b7-44b5-a82e-5c909efc50e9\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-58msl" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.796808 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8njz\" (UniqueName: \"kubernetes.io/projected/7a88f711-61b7-44b5-a82e-5c909efc50e9-kube-api-access-c8njz\") pod \"mariadb-operator-controller-manager-6994f66f48-58msl\" (UID: \"7a88f711-61b7-44b5-a82e-5c909efc50e9\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-58msl" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.856199 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-bfq8r"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.857415 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-bfq8r" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.860435 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-jffpn" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.862431 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gcgvr" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.862758 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnccx\" (UniqueName: \"kubernetes.io/projected/a8be2317-5b27-4e2b-b403-66d75647fda1-kube-api-access-vnccx\") pod \"neutron-operator-controller-manager-64ddbf8bb-nx2h7\" (UID: \"a8be2317-5b27-4e2b-b403-66d75647fda1\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-nx2h7" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.910278 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-wzn2r"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.911458 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wzn2r" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.912396 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-b9czx" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.948334 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-9nsdj" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.956353 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-nx2h7"] Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.963821 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnccx\" (UniqueName: \"kubernetes.io/projected/a8be2317-5b27-4e2b-b403-66d75647fda1-kube-api-access-vnccx\") pod \"neutron-operator-controller-manager-64ddbf8bb-nx2h7\" (UID: \"a8be2317-5b27-4e2b-b403-66d75647fda1\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-nx2h7" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.963938 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms2nw\" (UniqueName: \"kubernetes.io/projected/3e3dfdb0-abc8-458b-811a-752f6bd9430e-kube-api-access-ms2nw\") pod \"nova-operator-controller-manager-567668f5cf-wzn2r\" (UID: \"3e3dfdb0-abc8-458b-811a-752f6bd9430e\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wzn2r" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.964746 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j46n\" (UniqueName: \"kubernetes.io/projected/256aea85-8852-4a34-98ef-3c9e07b30453-kube-api-access-9j46n\") pod \"octavia-operator-controller-manager-69f8888797-bfq8r\" (UID: \"256aea85-8852-4a34-98ef-3c9e07b30453\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-bfq8r" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.982488 4870 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-58msl" Feb 16 17:17:51 crc kubenswrapper[4870]: I0216 17:17:51.998908 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnccx\" (UniqueName: \"kubernetes.io/projected/a8be2317-5b27-4e2b-b403-66d75647fda1-kube-api-access-vnccx\") pod \"neutron-operator-controller-manager-64ddbf8bb-nx2h7\" (UID: \"a8be2317-5b27-4e2b-b403-66d75647fda1\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-nx2h7" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.000131 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-bfq8r"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.024502 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-wzn2r"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.060344 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-cnmml"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.061300 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cnmml" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.073415 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2cs92" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.074888 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert\") pod \"infra-operator-controller-manager-79d975b745-4cm6m\" (UID: \"5d8420fa-9cbd-47f7-a252-a187de8515cd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.074916 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms2nw\" (UniqueName: \"kubernetes.io/projected/3e3dfdb0-abc8-458b-811a-752f6bd9430e-kube-api-access-ms2nw\") pod \"nova-operator-controller-manager-567668f5cf-wzn2r\" (UID: \"3e3dfdb0-abc8-458b-811a-752f6bd9430e\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wzn2r" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.074940 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j46n\" (UniqueName: \"kubernetes.io/projected/256aea85-8852-4a34-98ef-3c9e07b30453-kube-api-access-9j46n\") pod \"octavia-operator-controller-manager-69f8888797-bfq8r\" (UID: \"256aea85-8852-4a34-98ef-3c9e07b30453\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-bfq8r" Feb 16 17:17:52 crc kubenswrapper[4870]: E0216 17:17:52.075282 4870 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 17:17:52 crc kubenswrapper[4870]: E0216 17:17:52.075323 4870 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert podName:5d8420fa-9cbd-47f7-a252-a187de8515cd nodeName:}" failed. No retries permitted until 2026-02-16 17:17:53.07530768 +0000 UTC m=+1077.558772064 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert") pod "infra-operator-controller-manager-79d975b745-4cm6m" (UID: "5d8420fa-9cbd-47f7-a252-a187de8515cd") : secret "infra-operator-webhook-server-cert" not found Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.078817 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.079824 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.090509 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-lqndj" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.090706 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.090730 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-d6pgp"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.091481 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d6pgp" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.095514 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9p7ck" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.099888 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j46n\" (UniqueName: \"kubernetes.io/projected/256aea85-8852-4a34-98ef-3c9e07b30453-kube-api-access-9j46n\") pod \"octavia-operator-controller-manager-69f8888797-bfq8r\" (UID: \"256aea85-8852-4a34-98ef-3c9e07b30453\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-bfq8r" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.102340 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-nx2h7" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.113779 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms2nw\" (UniqueName: \"kubernetes.io/projected/3e3dfdb0-abc8-458b-811a-752f6bd9430e-kube-api-access-ms2nw\") pod \"nova-operator-controller-manager-567668f5cf-wzn2r\" (UID: \"3e3dfdb0-abc8-458b-811a-752f6bd9430e\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wzn2r" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.176512 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfls5\" (UniqueName: \"kubernetes.io/projected/5d2f7cba-dd0c-43f0-9b4b-a2097c4a457a-kube-api-access-pfls5\") pod \"placement-operator-controller-manager-8497b45c89-d6pgp\" (UID: \"5d2f7cba-dd0c-43f0-9b4b-a2097c4a457a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d6pgp" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.176809 4870 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5\" (UID: \"5f7c2918-26fd-46fb-bae6-52fbdd3eded7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.176857 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5c88\" (UniqueName: \"kubernetes.io/projected/98cc8d06-5e5f-406b-9212-0053c9c66238-kube-api-access-w5c88\") pod \"ovn-operator-controller-manager-d44cf6b75-cnmml\" (UID: \"98cc8d06-5e5f-406b-9212-0053c9c66238\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cnmml" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.176898 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r9m9\" (UniqueName: \"kubernetes.io/projected/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-kube-api-access-5r9m9\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5\" (UID: \"5f7c2918-26fd-46fb-bae6-52fbdd3eded7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.185594 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-cnmml"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.200848 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-bfq8r" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.217062 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.246155 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wzn2r" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.249030 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-wr4hc"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.250307 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wr4hc" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.252783 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-bwkzp" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.265703 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5884f785c-hjhdz"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.266911 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-hjhdz" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.272576 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dw65h" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.278086 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5c88\" (UniqueName: \"kubernetes.io/projected/98cc8d06-5e5f-406b-9212-0053c9c66238-kube-api-access-w5c88\") pod \"ovn-operator-controller-manager-d44cf6b75-cnmml\" (UID: \"98cc8d06-5e5f-406b-9212-0053c9c66238\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cnmml" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.278154 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r9m9\" (UniqueName: \"kubernetes.io/projected/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-kube-api-access-5r9m9\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5\" (UID: \"5f7c2918-26fd-46fb-bae6-52fbdd3eded7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.278280 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfls5\" (UniqueName: \"kubernetes.io/projected/5d2f7cba-dd0c-43f0-9b4b-a2097c4a457a-kube-api-access-pfls5\") pod \"placement-operator-controller-manager-8497b45c89-d6pgp\" (UID: \"5d2f7cba-dd0c-43f0-9b4b-a2097c4a457a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d6pgp" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.278320 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert\") pod 
\"openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5\" (UID: \"5f7c2918-26fd-46fb-bae6-52fbdd3eded7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" Feb 16 17:17:52 crc kubenswrapper[4870]: E0216 17:17:52.278470 4870 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:17:52 crc kubenswrapper[4870]: E0216 17:17:52.278530 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert podName:5f7c2918-26fd-46fb-bae6-52fbdd3eded7 nodeName:}" failed. No retries permitted until 2026-02-16 17:17:52.778509507 +0000 UTC m=+1077.261973891 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" (UID: "5f7c2918-26fd-46fb-bae6-52fbdd3eded7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.278995 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-d6pgp"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.296851 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-wr4hc"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.302115 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5884f785c-hjhdz"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.311470 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-lvqfx"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.313099 4870 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-lvqfx" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.319694 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-lvqfx"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.320756 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r9m9\" (UniqueName: \"kubernetes.io/projected/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-kube-api-access-5r9m9\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5\" (UID: \"5f7c2918-26fd-46fb-bae6-52fbdd3eded7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.321537 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-rl7j9" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.324655 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5c88\" (UniqueName: \"kubernetes.io/projected/98cc8d06-5e5f-406b-9212-0053c9c66238-kube-api-access-w5c88\") pod \"ovn-operator-controller-manager-d44cf6b75-cnmml\" (UID: \"98cc8d06-5e5f-406b-9212-0053c9c66238\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cnmml" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.324853 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfls5\" (UniqueName: \"kubernetes.io/projected/5d2f7cba-dd0c-43f0-9b4b-a2097c4a457a-kube-api-access-pfls5\") pod \"placement-operator-controller-manager-8497b45c89-d6pgp\" (UID: \"5d2f7cba-dd0c-43f0-9b4b-a2097c4a457a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d6pgp" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.342228 4870 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-8cf9q"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.343129 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8cf9q" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.345523 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9nh62" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.360838 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-8cf9q"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.380211 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hflkg\" (UniqueName: \"kubernetes.io/projected/e7030097-1d81-471d-8731-13a271f38050-kube-api-access-hflkg\") pod \"test-operator-controller-manager-7866795846-lvqfx\" (UID: \"e7030097-1d81-471d-8731-13a271f38050\") " pod="openstack-operators/test-operator-controller-manager-7866795846-lvqfx" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.380296 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxtt9\" (UniqueName: \"kubernetes.io/projected/3abc2c2d-aaa8-42a3-876f-1107127dab28-kube-api-access-pxtt9\") pod \"telemetry-operator-controller-manager-5884f785c-hjhdz\" (UID: \"3abc2c2d-aaa8-42a3-876f-1107127dab28\") " pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-hjhdz" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.380332 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjw96\" (UniqueName: \"kubernetes.io/projected/21ffa52a-74b8-444f-a1b6-95dfb4096974-kube-api-access-kjw96\") pod 
\"swift-operator-controller-manager-68f46476f-wr4hc\" (UID: \"21ffa52a-74b8-444f-a1b6-95dfb4096974\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-wr4hc" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.418967 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cnmml" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.419640 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.420712 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.426844 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.426856 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.427241 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9sbdw" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.446548 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d6pgp" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.484113 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcrq7\" (UniqueName: \"kubernetes.io/projected/feb3d8e0-ace7-4aa3-9621-e56d57e7b510-kube-api-access-jcrq7\") pod \"watcher-operator-controller-manager-5db88f68c-8cf9q\" (UID: \"feb3d8e0-ace7-4aa3-9621-e56d57e7b510\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8cf9q" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.484524 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hflkg\" (UniqueName: \"kubernetes.io/projected/e7030097-1d81-471d-8731-13a271f38050-kube-api-access-hflkg\") pod \"test-operator-controller-manager-7866795846-lvqfx\" (UID: \"e7030097-1d81-471d-8731-13a271f38050\") " pod="openstack-operators/test-operator-controller-manager-7866795846-lvqfx" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.484623 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxtt9\" (UniqueName: \"kubernetes.io/projected/3abc2c2d-aaa8-42a3-876f-1107127dab28-kube-api-access-pxtt9\") pod \"telemetry-operator-controller-manager-5884f785c-hjhdz\" (UID: \"3abc2c2d-aaa8-42a3-876f-1107127dab28\") " pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-hjhdz" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.484656 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjw96\" (UniqueName: \"kubernetes.io/projected/21ffa52a-74b8-444f-a1b6-95dfb4096974-kube-api-access-kjw96\") pod \"swift-operator-controller-manager-68f46476f-wr4hc\" (UID: \"21ffa52a-74b8-444f-a1b6-95dfb4096974\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-wr4hc" Feb 16 17:17:52 crc kubenswrapper[4870]: 
I0216 17:17:52.484727 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.484814 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.484855 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tncbz\" (UniqueName: \"kubernetes.io/projected/44780e56-bccf-4440-b6c6-0333808b2e02-kube-api-access-tncbz\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.493540 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.511112 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4rjv"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.517012 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4rjv"] Feb 16 17:17:52 crc 
kubenswrapper[4870]: I0216 17:17:52.517140 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4rjv" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.522608 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hflkg\" (UniqueName: \"kubernetes.io/projected/e7030097-1d81-471d-8731-13a271f38050-kube-api-access-hflkg\") pod \"test-operator-controller-manager-7866795846-lvqfx\" (UID: \"e7030097-1d81-471d-8731-13a271f38050\") " pod="openstack-operators/test-operator-controller-manager-7866795846-lvqfx" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.524451 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxtt9\" (UniqueName: \"kubernetes.io/projected/3abc2c2d-aaa8-42a3-876f-1107127dab28-kube-api-access-pxtt9\") pod \"telemetry-operator-controller-manager-5884f785c-hjhdz\" (UID: \"3abc2c2d-aaa8-42a3-876f-1107127dab28\") " pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-hjhdz" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.525256 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-m25sx" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.561035 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjw96\" (UniqueName: \"kubernetes.io/projected/21ffa52a-74b8-444f-a1b6-95dfb4096974-kube-api-access-kjw96\") pod \"swift-operator-controller-manager-68f46476f-wr4hc\" (UID: \"21ffa52a-74b8-444f-a1b6-95dfb4096974\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-wr4hc" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.587195 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:17:52 crc kubenswrapper[4870]: E0216 17:17:52.587243 4870 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.587282 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tncbz\" (UniqueName: \"kubernetes.io/projected/44780e56-bccf-4440-b6c6-0333808b2e02-kube-api-access-tncbz\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:17:52 crc kubenswrapper[4870]: E0216 17:17:52.587310 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs podName:44780e56-bccf-4440-b6c6-0333808b2e02 nodeName:}" failed. No retries permitted until 2026-02-16 17:17:53.087293304 +0000 UTC m=+1077.570757688 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs") pod "openstack-operator-controller-manager-6f58b764dd-f84j7" (UID: "44780e56-bccf-4440-b6c6-0333808b2e02") : secret "webhook-server-cert" not found Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.587401 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcrq7\" (UniqueName: \"kubernetes.io/projected/feb3d8e0-ace7-4aa3-9621-e56d57e7b510-kube-api-access-jcrq7\") pod \"watcher-operator-controller-manager-5db88f68c-8cf9q\" (UID: \"feb3d8e0-ace7-4aa3-9621-e56d57e7b510\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8cf9q" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.587534 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z74b\" (UniqueName: \"kubernetes.io/projected/2a0ef8b9-a1a9-4014-b5dd-b7356c6b411c-kube-api-access-9z74b\") pod \"rabbitmq-cluster-operator-manager-668c99d594-k4rjv\" (UID: \"2a0ef8b9-a1a9-4014-b5dd-b7356c6b411c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4rjv" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.587556 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:17:52 crc kubenswrapper[4870]: E0216 17:17:52.587674 4870 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 17:17:52 crc kubenswrapper[4870]: E0216 17:17:52.587723 4870 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs podName:44780e56-bccf-4440-b6c6-0333808b2e02 nodeName:}" failed. No retries permitted until 2026-02-16 17:17:53.087704726 +0000 UTC m=+1077.571169110 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs") pod "openstack-operator-controller-manager-6f58b764dd-f84j7" (UID: "44780e56-bccf-4440-b6c6-0333808b2e02") : secret "metrics-server-cert" not found Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.607183 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcrq7\" (UniqueName: \"kubernetes.io/projected/feb3d8e0-ace7-4aa3-9621-e56d57e7b510-kube-api-access-jcrq7\") pod \"watcher-operator-controller-manager-5db88f68c-8cf9q\" (UID: \"feb3d8e0-ace7-4aa3-9621-e56d57e7b510\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8cf9q" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.608778 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tncbz\" (UniqueName: \"kubernetes.io/projected/44780e56-bccf-4440-b6c6-0333808b2e02-kube-api-access-tncbz\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.630290 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wr4hc" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.659476 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cchkt"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.690014 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z74b\" (UniqueName: \"kubernetes.io/projected/2a0ef8b9-a1a9-4014-b5dd-b7356c6b411c-kube-api-access-9z74b\") pod \"rabbitmq-cluster-operator-manager-668c99d594-k4rjv\" (UID: \"2a0ef8b9-a1a9-4014-b5dd-b7356c6b411c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4rjv" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.717591 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z74b\" (UniqueName: \"kubernetes.io/projected/2a0ef8b9-a1a9-4014-b5dd-b7356c6b411c-kube-api-access-9z74b\") pod \"rabbitmq-cluster-operator-manager-668c99d594-k4rjv\" (UID: \"2a0ef8b9-a1a9-4014-b5dd-b7356c6b411c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4rjv" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.746839 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-hjhdz" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.759268 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-lvqfx" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.794012 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5\" (UID: \"5f7c2918-26fd-46fb-bae6-52fbdd3eded7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" Feb 16 17:17:52 crc kubenswrapper[4870]: E0216 17:17:52.794297 4870 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:17:52 crc kubenswrapper[4870]: E0216 17:17:52.794357 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert podName:5f7c2918-26fd-46fb-bae6-52fbdd3eded7 nodeName:}" failed. No retries permitted until 2026-02-16 17:17:53.794337479 +0000 UTC m=+1078.277801863 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" (UID: "5f7c2918-26fd-46fb-bae6-52fbdd3eded7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.872304 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-k468x"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.889678 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8cf9q" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.921440 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-68kws"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.921769 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4rjv" Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.941264 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-v7gn2"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.964998 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-ns7zz"] Feb 16 17:17:52 crc kubenswrapper[4870]: I0216 17:17:52.980174 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-t9hz5"] Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.014914 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-gcgvr"] Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.023742 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hqjtw"] Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.030400 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-58msl"] Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.046836 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-b9czx"] Feb 16 17:17:53 crc kubenswrapper[4870]: W0216 17:17:53.066879 4870 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd026e8b_7f8a_4c07_9bee_84e0fe1e535f.slice/crio-5cba4240ba0f4e7835adfbe26edcd202ada1a7977866c18e2f7100dd19f628dc WatchSource:0}: Error finding container 5cba4240ba0f4e7835adfbe26edcd202ada1a7977866c18e2f7100dd19f628dc: Status 404 returned error can't find the container with id 5cba4240ba0f4e7835adfbe26edcd202ada1a7977866c18e2f7100dd19f628dc Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.109971 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert\") pod \"infra-operator-controller-manager-79d975b745-4cm6m\" (UID: \"5d8420fa-9cbd-47f7-a252-a187de8515cd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.110024 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.110130 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:17:53 crc kubenswrapper[4870]: E0216 17:17:53.110161 4870 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 17:17:53 crc 
kubenswrapper[4870]: E0216 17:17:53.110269 4870 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 17:17:53 crc kubenswrapper[4870]: E0216 17:17:53.110341 4870 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 17:17:53 crc kubenswrapper[4870]: E0216 17:17:53.110272 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert podName:5d8420fa-9cbd-47f7-a252-a187de8515cd nodeName:}" failed. No retries permitted until 2026-02-16 17:17:55.110232446 +0000 UTC m=+1079.593696830 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert") pod "infra-operator-controller-manager-79d975b745-4cm6m" (UID: "5d8420fa-9cbd-47f7-a252-a187de8515cd") : secret "infra-operator-webhook-server-cert" not found Feb 16 17:17:53 crc kubenswrapper[4870]: E0216 17:17:53.110377 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs podName:44780e56-bccf-4440-b6c6-0333808b2e02 nodeName:}" failed. No retries permitted until 2026-02-16 17:17:54.1103614 +0000 UTC m=+1078.593825784 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs") pod "openstack-operator-controller-manager-6f58b764dd-f84j7" (UID: "44780e56-bccf-4440-b6c6-0333808b2e02") : secret "metrics-server-cert" not found Feb 16 17:17:53 crc kubenswrapper[4870]: E0216 17:17:53.110387 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs podName:44780e56-bccf-4440-b6c6-0333808b2e02 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:17:54.11038198 +0000 UTC m=+1078.593846364 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs") pod "openstack-operator-controller-manager-6f58b764dd-f84j7" (UID: "44780e56-bccf-4440-b6c6-0333808b2e02") : secret "webhook-server-cert" not found Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.308043 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-cnmml"] Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.324003 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-nx2h7"] Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.327720 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-wzn2r"] Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.368825 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-d6pgp"] Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.382398 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-58msl" event={"ID":"7a88f711-61b7-44b5-a82e-5c909efc50e9","Type":"ContainerStarted","Data":"b9fbef854343f78bec0b73c05c231771957c3ddbe8cbf8a71248f48c90083a6f"} Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.398712 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ns7zz" event={"ID":"2c0e615e-3bf7-4627-b800-af60affed5f5","Type":"ContainerStarted","Data":"995ae710e0ce8fe3ae7960d360d8310672681de2339326b1925f677a6cbeb5da"} Feb 16 17:17:53 crc kubenswrapper[4870]: E0216 17:17:53.399226 4870 kuberuntime_manager.go:1274] "Unhandled 
Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pfls5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-d6pgp_openstack-operators(5d2f7cba-dd0c-43f0-9b4b-a2097c4a457a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 17:17:53 crc kubenswrapper[4870]: E0216 17:17:53.400338 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d6pgp" podUID="5d2f7cba-dd0c-43f0-9b4b-a2097c4a457a" Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.406782 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-bfq8r"] Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.410376 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cchkt" event={"ID":"e7d3d5ca-7088-46e0-88eb-bc8f1270b85d","Type":"ContainerStarted","Data":"c3a9af8f3977d8f7a3b03b47a1f3e5f2a5b586cf5212755bdc6341c8e7722c2b"} Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.413073 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-b9czx" event={"ID":"dd026e8b-7f8a-4c07-9bee-84e0fe1e535f","Type":"ContainerStarted","Data":"5cba4240ba0f4e7835adfbe26edcd202ada1a7977866c18e2f7100dd19f628dc"} Feb 16 17:17:53 crc kubenswrapper[4870]: E0216 17:17:53.430687 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kjw96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-wr4hc_openstack-operators(21ffa52a-74b8-444f-a1b6-95dfb4096974): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.430847 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v7gn2" event={"ID":"0d52da13-82bf-439f-ac03-bbf3f539de78","Type":"ContainerStarted","Data":"e128ec05b2453684acb4ec1a2b9d195fcc6ab7ba5c1403fa024364461a6df67a"} Feb 16 17:17:53 crc kubenswrapper[4870]: E0216 17:17:53.434706 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9j46n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-bfq8r_openstack-operators(256aea85-8852-4a34-98ef-3c9e07b30453): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 17:17:53 crc kubenswrapper[4870]: E0216 17:17:53.434892 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wr4hc" podUID="21ffa52a-74b8-444f-a1b6-95dfb4096974" Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.434963 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-wr4hc"] Feb 16 17:17:53 crc kubenswrapper[4870]: E0216 17:17:53.436036 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-bfq8r" podUID="256aea85-8852-4a34-98ef-3c9e07b30453" Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.439825 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hqjtw" event={"ID":"bb375b58-f7fa-4006-b087-cb06ea0cfc86","Type":"ContainerStarted","Data":"d9f8534e026eef6f4ff33100309b91dcbf9e5fbc7ad5adc2d478c05de29a009c"} Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.445701 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gcgvr" event={"ID":"6d98461a-872e-4857-a100-f905a5231b83","Type":"ContainerStarted","Data":"3af94fddebaada09dd545756a6a841dd37c2cf9784e6709b6fa725013cef1f5c"} Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.450617 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-k468x" event={"ID":"65381e3b-70d2-4dbf-a1e5-279696c5cc09","Type":"ContainerStarted","Data":"90a0807038144dfd1cbd2bbff5cecb9845175250fb2eef65be046d3eb9b7fa43"} Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.452303 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5884f785c-hjhdz"] Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.452992 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-t9hz5" event={"ID":"aef27f45-8abf-44f3-a290-c6d49dbfa1fd","Type":"ContainerStarted","Data":"5fad807731f0c47805313f4bfac8154cc92a190443a47dc26389647faef27d51"} Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.462902 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-68kws" event={"ID":"a104184d-e08b-46ef-8595-6b21f2826f9a","Type":"ContainerStarted","Data":"be5f7d6eecb3db07aba72f92aaa858cf994fba801c4cf29d6daf939615b96a1a"} Feb 16 17:17:53 crc kubenswrapper[4870]: W0216 17:17:53.466172 4870 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3abc2c2d_aaa8_42a3_876f_1107127dab28.slice/crio-69ad5312a16e66d3c9d55be7afc98aab5c19cf470369e8053f34d534f55d57ee WatchSource:0}: Error finding container 69ad5312a16e66d3c9d55be7afc98aab5c19cf470369e8053f34d534f55d57ee: Status 404 returned error can't find the container with id 69ad5312a16e66d3c9d55be7afc98aab5c19cf470369e8053f34d534f55d57ee Feb 16 17:17:53 crc kubenswrapper[4870]: E0216 17:17:53.472642 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.64:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pxtt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5884f785c-hjhdz_openstack-operators(3abc2c2d-aaa8-42a3-876f-1107127dab28): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 17:17:53 crc kubenswrapper[4870]: E0216 17:17:53.475000 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-hjhdz" podUID="3abc2c2d-aaa8-42a3-876f-1107127dab28" Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.482883 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-8cf9q"] Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.517825 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-lvqfx"] Feb 16 17:17:53 crc kubenswrapper[4870]: E0216 17:17:53.531005 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hflkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-lvqfx_openstack-operators(e7030097-1d81-471d-8731-13a271f38050): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 17:17:53 crc kubenswrapper[4870]: E0216 17:17:53.532929 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-lvqfx" podUID="e7030097-1d81-471d-8731-13a271f38050" Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.755349 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4rjv"] Feb 16 17:17:53 crc kubenswrapper[4870]: W0216 17:17:53.758917 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a0ef8b9_a1a9_4014_b5dd_b7356c6b411c.slice/crio-34afeb02da13e31c86bfe3e5d15eaa943e3d34119f333c2578fb22720b25de29 WatchSource:0}: Error finding container 34afeb02da13e31c86bfe3e5d15eaa943e3d34119f333c2578fb22720b25de29: Status 404 returned error can't find the container with id 
34afeb02da13e31c86bfe3e5d15eaa943e3d34119f333c2578fb22720b25de29 Feb 16 17:17:53 crc kubenswrapper[4870]: I0216 17:17:53.835038 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5\" (UID: \"5f7c2918-26fd-46fb-bae6-52fbdd3eded7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" Feb 16 17:17:53 crc kubenswrapper[4870]: E0216 17:17:53.835187 4870 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:17:53 crc kubenswrapper[4870]: E0216 17:17:53.835265 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert podName:5f7c2918-26fd-46fb-bae6-52fbdd3eded7 nodeName:}" failed. No retries permitted until 2026-02-16 17:17:55.835246814 +0000 UTC m=+1080.318711198 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" (UID: "5f7c2918-26fd-46fb-bae6-52fbdd3eded7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:17:54 crc kubenswrapper[4870]: I0216 17:17:54.151727 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:17:54 crc kubenswrapper[4870]: I0216 17:17:54.151900 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:17:54 crc kubenswrapper[4870]: E0216 17:17:54.152011 4870 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 17:17:54 crc kubenswrapper[4870]: E0216 17:17:54.152079 4870 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 17:17:54 crc kubenswrapper[4870]: E0216 17:17:54.152098 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs podName:44780e56-bccf-4440-b6c6-0333808b2e02 nodeName:}" failed. No retries permitted until 2026-02-16 17:17:56.152076828 +0000 UTC m=+1080.635541222 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs") pod "openstack-operator-controller-manager-6f58b764dd-f84j7" (UID: "44780e56-bccf-4440-b6c6-0333808b2e02") : secret "webhook-server-cert" not found Feb 16 17:17:54 crc kubenswrapper[4870]: E0216 17:17:54.152144 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs podName:44780e56-bccf-4440-b6c6-0333808b2e02 nodeName:}" failed. No retries permitted until 2026-02-16 17:17:56.152124839 +0000 UTC m=+1080.635589233 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs") pod "openstack-operator-controller-manager-6f58b764dd-f84j7" (UID: "44780e56-bccf-4440-b6c6-0333808b2e02") : secret "metrics-server-cert" not found Feb 16 17:17:54 crc kubenswrapper[4870]: I0216 17:17:54.492190 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wzn2r" event={"ID":"3e3dfdb0-abc8-458b-811a-752f6bd9430e","Type":"ContainerStarted","Data":"cc3be308601d7cd50c4a5f523420d7188986971d23892c3beab1febe9368756f"} Feb 16 17:17:54 crc kubenswrapper[4870]: I0216 17:17:54.493874 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-hjhdz" event={"ID":"3abc2c2d-aaa8-42a3-876f-1107127dab28","Type":"ContainerStarted","Data":"69ad5312a16e66d3c9d55be7afc98aab5c19cf470369e8053f34d534f55d57ee"} Feb 16 17:17:54 crc kubenswrapper[4870]: E0216 17:17:54.495901 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.64:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" 
pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-hjhdz" podUID="3abc2c2d-aaa8-42a3-876f-1107127dab28" Feb 16 17:17:54 crc kubenswrapper[4870]: I0216 17:17:54.496294 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cnmml" event={"ID":"98cc8d06-5e5f-406b-9212-0053c9c66238","Type":"ContainerStarted","Data":"9d670b2e66bfb5ed937f2177a8f0c42adc8e3a7f9247cfaa718f6883aa514b1b"} Feb 16 17:17:54 crc kubenswrapper[4870]: I0216 17:17:54.500644 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d6pgp" event={"ID":"5d2f7cba-dd0c-43f0-9b4b-a2097c4a457a","Type":"ContainerStarted","Data":"e31d172d49144bee89e4a074f9e6ad51e14a548c1711cb06e8a936370e8a459b"} Feb 16 17:17:54 crc kubenswrapper[4870]: E0216 17:17:54.501773 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d6pgp" podUID="5d2f7cba-dd0c-43f0-9b4b-a2097c4a457a" Feb 16 17:17:54 crc kubenswrapper[4870]: I0216 17:17:54.511226 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-nx2h7" event={"ID":"a8be2317-5b27-4e2b-b403-66d75647fda1","Type":"ContainerStarted","Data":"15edefb64f84e511eda6bd0d429efa7f5e49c0efa2d8ff5eb89e2eaaed534b54"} Feb 16 17:17:54 crc kubenswrapper[4870]: I0216 17:17:54.517780 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-bfq8r" event={"ID":"256aea85-8852-4a34-98ef-3c9e07b30453","Type":"ContainerStarted","Data":"2d4a04098b1d287088dd4d7265660911f212fbe2d8f8faad1d5d29321bc0f276"} Feb 
16 17:17:54 crc kubenswrapper[4870]: E0216 17:17:54.519862 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-bfq8r" podUID="256aea85-8852-4a34-98ef-3c9e07b30453" Feb 16 17:17:54 crc kubenswrapper[4870]: I0216 17:17:54.520550 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wr4hc" event={"ID":"21ffa52a-74b8-444f-a1b6-95dfb4096974","Type":"ContainerStarted","Data":"d86d8a48dfbc732c829c5387824de61a704d3ccb5f6e358d582faaf369927042"} Feb 16 17:17:54 crc kubenswrapper[4870]: I0216 17:17:54.530901 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-lvqfx" event={"ID":"e7030097-1d81-471d-8731-13a271f38050","Type":"ContainerStarted","Data":"b242442283985ac6e2abcee2094e5e2848d62e73dc1252b35434f4c76a090d9a"} Feb 16 17:17:54 crc kubenswrapper[4870]: E0216 17:17:54.534455 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-lvqfx" podUID="e7030097-1d81-471d-8731-13a271f38050" Feb 16 17:17:54 crc kubenswrapper[4870]: E0216 17:17:54.534627 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-wr4hc" podUID="21ffa52a-74b8-444f-a1b6-95dfb4096974" Feb 16 17:17:54 crc kubenswrapper[4870]: I0216 17:17:54.549606 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4rjv" event={"ID":"2a0ef8b9-a1a9-4014-b5dd-b7356c6b411c","Type":"ContainerStarted","Data":"34afeb02da13e31c86bfe3e5d15eaa943e3d34119f333c2578fb22720b25de29"} Feb 16 17:17:54 crc kubenswrapper[4870]: I0216 17:17:54.553011 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8cf9q" event={"ID":"feb3d8e0-ace7-4aa3-9621-e56d57e7b510","Type":"ContainerStarted","Data":"5ab543e3fae17a16a50a50eff8c4233cc24f67626f2fee606dda73105785b85a"} Feb 16 17:17:55 crc kubenswrapper[4870]: I0216 17:17:55.175601 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert\") pod \"infra-operator-controller-manager-79d975b745-4cm6m\" (UID: \"5d8420fa-9cbd-47f7-a252-a187de8515cd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" Feb 16 17:17:55 crc kubenswrapper[4870]: E0216 17:17:55.184228 4870 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 17:17:55 crc kubenswrapper[4870]: E0216 17:17:55.184309 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert podName:5d8420fa-9cbd-47f7-a252-a187de8515cd nodeName:}" failed. No retries permitted until 2026-02-16 17:17:59.184288389 +0000 UTC m=+1083.667752773 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert") pod "infra-operator-controller-manager-79d975b745-4cm6m" (UID: "5d8420fa-9cbd-47f7-a252-a187de8515cd") : secret "infra-operator-webhook-server-cert" not found Feb 16 17:17:55 crc kubenswrapper[4870]: E0216 17:17:55.564095 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d6pgp" podUID="5d2f7cba-dd0c-43f0-9b4b-a2097c4a457a" Feb 16 17:17:55 crc kubenswrapper[4870]: E0216 17:17:55.564683 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wr4hc" podUID="21ffa52a-74b8-444f-a1b6-95dfb4096974" Feb 16 17:17:55 crc kubenswrapper[4870]: E0216 17:17:55.565037 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-bfq8r" podUID="256aea85-8852-4a34-98ef-3c9e07b30453" Feb 16 17:17:55 crc kubenswrapper[4870]: E0216 17:17:55.565181 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-lvqfx" podUID="e7030097-1d81-471d-8731-13a271f38050" Feb 16 17:17:55 crc kubenswrapper[4870]: E0216 17:17:55.567094 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.64:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-hjhdz" podUID="3abc2c2d-aaa8-42a3-876f-1107127dab28" Feb 16 17:17:55 crc kubenswrapper[4870]: I0216 17:17:55.908100 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5\" (UID: \"5f7c2918-26fd-46fb-bae6-52fbdd3eded7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" Feb 16 17:17:55 crc kubenswrapper[4870]: E0216 17:17:55.908212 4870 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:17:55 crc kubenswrapper[4870]: E0216 17:17:55.908272 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert podName:5f7c2918-26fd-46fb-bae6-52fbdd3eded7 nodeName:}" failed. No retries permitted until 2026-02-16 17:17:59.908255627 +0000 UTC m=+1084.391720011 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" (UID: "5f7c2918-26fd-46fb-bae6-52fbdd3eded7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:17:56 crc kubenswrapper[4870]: I0216 17:17:56.211981 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:17:56 crc kubenswrapper[4870]: I0216 17:17:56.212054 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:17:56 crc kubenswrapper[4870]: E0216 17:17:56.212172 4870 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 17:17:56 crc kubenswrapper[4870]: E0216 17:17:56.212194 4870 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 17:17:56 crc kubenswrapper[4870]: E0216 17:17:56.212253 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs podName:44780e56-bccf-4440-b6c6-0333808b2e02 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:00.21223568 +0000 UTC m=+1084.695700064 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs") pod "openstack-operator-controller-manager-6f58b764dd-f84j7" (UID: "44780e56-bccf-4440-b6c6-0333808b2e02") : secret "metrics-server-cert" not found Feb 16 17:17:56 crc kubenswrapper[4870]: E0216 17:17:56.212271 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs podName:44780e56-bccf-4440-b6c6-0333808b2e02 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:00.21226238 +0000 UTC m=+1084.695726754 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs") pod "openstack-operator-controller-manager-6f58b764dd-f84j7" (UID: "44780e56-bccf-4440-b6c6-0333808b2e02") : secret "webhook-server-cert" not found Feb 16 17:17:59 crc kubenswrapper[4870]: I0216 17:17:59.271458 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert\") pod \"infra-operator-controller-manager-79d975b745-4cm6m\" (UID: \"5d8420fa-9cbd-47f7-a252-a187de8515cd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" Feb 16 17:17:59 crc kubenswrapper[4870]: E0216 17:17:59.271924 4870 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 17:17:59 crc kubenswrapper[4870]: E0216 17:17:59.271994 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert podName:5d8420fa-9cbd-47f7-a252-a187de8515cd nodeName:}" failed. No retries permitted until 2026-02-16 17:18:07.271979394 +0000 UTC m=+1091.755443778 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert") pod "infra-operator-controller-manager-79d975b745-4cm6m" (UID: "5d8420fa-9cbd-47f7-a252-a187de8515cd") : secret "infra-operator-webhook-server-cert" not found Feb 16 17:17:59 crc kubenswrapper[4870]: I0216 17:17:59.988375 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5\" (UID: \"5f7c2918-26fd-46fb-bae6-52fbdd3eded7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" Feb 16 17:17:59 crc kubenswrapper[4870]: E0216 17:17:59.988569 4870 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:17:59 crc kubenswrapper[4870]: E0216 17:17:59.989069 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert podName:5f7c2918-26fd-46fb-bae6-52fbdd3eded7 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:07.989037798 +0000 UTC m=+1092.472502182 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" (UID: "5f7c2918-26fd-46fb-bae6-52fbdd3eded7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:18:00 crc kubenswrapper[4870]: I0216 17:18:00.294356 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:18:00 crc kubenswrapper[4870]: I0216 17:18:00.294467 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:18:00 crc kubenswrapper[4870]: E0216 17:18:00.294547 4870 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 17:18:00 crc kubenswrapper[4870]: E0216 17:18:00.294666 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs podName:44780e56-bccf-4440-b6c6-0333808b2e02 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:08.294643775 +0000 UTC m=+1092.778108159 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs") pod "openstack-operator-controller-manager-6f58b764dd-f84j7" (UID: "44780e56-bccf-4440-b6c6-0333808b2e02") : secret "metrics-server-cert" not found Feb 16 17:18:00 crc kubenswrapper[4870]: E0216 17:18:00.294729 4870 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 17:18:00 crc kubenswrapper[4870]: E0216 17:18:00.294831 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs podName:44780e56-bccf-4440-b6c6-0333808b2e02 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:08.29480021 +0000 UTC m=+1092.778264774 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs") pod "openstack-operator-controller-manager-6f58b764dd-f84j7" (UID: "44780e56-bccf-4440-b6c6-0333808b2e02") : secret "webhook-server-cert" not found Feb 16 17:18:04 crc kubenswrapper[4870]: E0216 17:18:04.686522 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" Feb 16 17:18:04 crc kubenswrapper[4870]: E0216 17:18:04.687367 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vq8xs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5d946d989d-68kws_openstack-operators(a104184d-e08b-46ef-8595-6b21f2826f9a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:04 crc kubenswrapper[4870]: E0216 17:18:04.688794 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-68kws" podUID="a104184d-e08b-46ef-8595-6b21f2826f9a" Feb 16 17:18:05 crc kubenswrapper[4870]: I0216 17:18:05.366320 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:18:05 crc kubenswrapper[4870]: I0216 17:18:05.366597 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:18:05 crc kubenswrapper[4870]: E0216 17:18:05.631855 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-68kws" podUID="a104184d-e08b-46ef-8595-6b21f2826f9a" Feb 16 17:18:05 crc kubenswrapper[4870]: E0216 17:18:05.724459 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 16 17:18:05 crc kubenswrapper[4870]: E0216 17:18:05.725552 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mrvhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-ns7zz_openstack-operators(2c0e615e-3bf7-4627-b800-af60affed5f5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:05 crc kubenswrapper[4870]: E0216 17:18:05.726897 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ns7zz" podUID="2c0e615e-3bf7-4627-b800-af60affed5f5" Feb 16 17:18:06 crc kubenswrapper[4870]: E0216 17:18:06.390504 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 16 17:18:06 crc kubenswrapper[4870]: E0216 17:18:06.391188 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqqhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-v7gn2_openstack-operators(0d52da13-82bf-439f-ac03-bbf3f539de78): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:06 crc kubenswrapper[4870]: E0216 17:18:06.392431 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v7gn2" podUID="0d52da13-82bf-439f-ac03-bbf3f539de78" Feb 16 17:18:06 crc kubenswrapper[4870]: E0216 17:18:06.644262 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ns7zz" podUID="2c0e615e-3bf7-4627-b800-af60affed5f5" Feb 16 17:18:06 crc kubenswrapper[4870]: E0216 17:18:06.644512 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v7gn2" podUID="0d52da13-82bf-439f-ac03-bbf3f539de78" Feb 16 17:18:07 crc kubenswrapper[4870]: E0216 17:18:07.019146 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 16 17:18:07 crc kubenswrapper[4870]: E0216 17:18:07.019442 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vnccx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-nx2h7_openstack-operators(a8be2317-5b27-4e2b-b403-66d75647fda1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:07 crc kubenswrapper[4870]: E0216 17:18:07.021134 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-nx2h7" podUID="a8be2317-5b27-4e2b-b403-66d75647fda1" Feb 16 17:18:07 crc kubenswrapper[4870]: I0216 17:18:07.323806 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert\") pod \"infra-operator-controller-manager-79d975b745-4cm6m\" (UID: \"5d8420fa-9cbd-47f7-a252-a187de8515cd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" Feb 16 17:18:07 crc kubenswrapper[4870]: E0216 17:18:07.323963 4870 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 17:18:07 crc kubenswrapper[4870]: E0216 17:18:07.324039 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert podName:5d8420fa-9cbd-47f7-a252-a187de8515cd nodeName:}" failed. No retries permitted until 2026-02-16 17:18:23.324020242 +0000 UTC m=+1107.807484626 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert") pod "infra-operator-controller-manager-79d975b745-4cm6m" (UID: "5d8420fa-9cbd-47f7-a252-a187de8515cd") : secret "infra-operator-webhook-server-cert" not found Feb 16 17:18:07 crc kubenswrapper[4870]: E0216 17:18:07.523921 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 16 17:18:07 crc kubenswrapper[4870]: E0216 17:18:07.524144 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w5c88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-cnmml_openstack-operators(98cc8d06-5e5f-406b-9212-0053c9c66238): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:07 crc kubenswrapper[4870]: E0216 17:18:07.525452 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cnmml" podUID="98cc8d06-5e5f-406b-9212-0053c9c66238" Feb 16 17:18:07 crc kubenswrapper[4870]: E0216 17:18:07.645289 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-nx2h7" podUID="a8be2317-5b27-4e2b-b403-66d75647fda1" Feb 16 17:18:07 crc kubenswrapper[4870]: E0216 17:18:07.646625 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cnmml" podUID="98cc8d06-5e5f-406b-9212-0053c9c66238" Feb 16 17:18:08 crc kubenswrapper[4870]: E0216 17:18:08.032162 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" Feb 16 17:18:08 crc kubenswrapper[4870]: E0216 17:18:08.032425 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jcrq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-8cf9q_openstack-operators(feb3d8e0-ace7-4aa3-9621-e56d57e7b510): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:08 crc kubenswrapper[4870]: E0216 17:18:08.035000 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8cf9q" podUID="feb3d8e0-ace7-4aa3-9621-e56d57e7b510" Feb 16 17:18:08 crc kubenswrapper[4870]: I0216 17:18:08.035470 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5\" (UID: \"5f7c2918-26fd-46fb-bae6-52fbdd3eded7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" Feb 16 17:18:08 crc kubenswrapper[4870]: E0216 17:18:08.035694 4870 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:18:08 crc kubenswrapper[4870]: E0216 17:18:08.035783 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert podName:5f7c2918-26fd-46fb-bae6-52fbdd3eded7 nodeName:}" failed. No retries permitted until 2026-02-16 17:18:24.035754637 +0000 UTC m=+1108.519219061 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" (UID: "5f7c2918-26fd-46fb-bae6-52fbdd3eded7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:18:08 crc kubenswrapper[4870]: I0216 17:18:08.345675 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:18:08 crc kubenswrapper[4870]: I0216 17:18:08.345866 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:18:08 crc kubenswrapper[4870]: I0216 17:18:08.351455 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-metrics-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") " pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:18:08 crc kubenswrapper[4870]: I0216 17:18:08.353819 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/44780e56-bccf-4440-b6c6-0333808b2e02-webhook-certs\") pod \"openstack-operator-controller-manager-6f58b764dd-f84j7\" (UID: \"44780e56-bccf-4440-b6c6-0333808b2e02\") 
" pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:18:08 crc kubenswrapper[4870]: I0216 17:18:08.508702 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9sbdw" Feb 16 17:18:08 crc kubenswrapper[4870]: I0216 17:18:08.517379 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:18:08 crc kubenswrapper[4870]: E0216 17:18:08.650734 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8cf9q" podUID="feb3d8e0-ace7-4aa3-9621-e56d57e7b510" Feb 16 17:18:08 crc kubenswrapper[4870]: E0216 17:18:08.654536 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 16 17:18:08 crc kubenswrapper[4870]: E0216 17:18:08.654703 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8blz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-hqjtw_openstack-operators(bb375b58-f7fa-4006-b087-cb06ea0cfc86): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:08 crc kubenswrapper[4870]: E0216 17:18:08.655859 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hqjtw" podUID="bb375b58-f7fa-4006-b087-cb06ea0cfc86" Feb 16 17:18:09 crc kubenswrapper[4870]: E0216 17:18:09.422698 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" Feb 16 17:18:09 crc kubenswrapper[4870]: E0216 17:18:09.422915 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jqq7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987464f4-t9hz5_openstack-operators(aef27f45-8abf-44f3-a290-c6d49dbfa1fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:09 crc kubenswrapper[4870]: E0216 17:18:09.424143 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-t9hz5" podUID="aef27f45-8abf-44f3-a290-c6d49dbfa1fd" Feb 16 17:18:09 crc kubenswrapper[4870]: E0216 17:18:09.681192 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hqjtw" podUID="bb375b58-f7fa-4006-b087-cb06ea0cfc86" Feb 16 17:18:09 crc kubenswrapper[4870]: E0216 17:18:09.681198 4870 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-t9hz5" podUID="aef27f45-8abf-44f3-a290-c6d49dbfa1fd" Feb 16 17:18:10 crc kubenswrapper[4870]: E0216 17:18:10.198396 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" Feb 16 17:18:10 crc kubenswrapper[4870]: E0216 17:18:10.198606 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-czdn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-868647ff47-cchkt_openstack-operators(e7d3d5ca-7088-46e0-88eb-bc8f1270b85d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:10 crc kubenswrapper[4870]: E0216 17:18:10.199768 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cchkt" podUID="e7d3d5ca-7088-46e0-88eb-bc8f1270b85d" Feb 16 17:18:10 crc kubenswrapper[4870]: E0216 17:18:10.684727 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cchkt" podUID="e7d3d5ca-7088-46e0-88eb-bc8f1270b85d" Feb 16 17:18:10 crc kubenswrapper[4870]: E0216 17:18:10.897453 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 16 17:18:10 crc kubenswrapper[4870]: E0216 17:18:10.897726 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ms2nw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-wzn2r_openstack-operators(3e3dfdb0-abc8-458b-811a-752f6bd9430e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:10 crc kubenswrapper[4870]: E0216 17:18:10.898895 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wzn2r" podUID="3e3dfdb0-abc8-458b-811a-752f6bd9430e" Feb 16 17:18:11 crc kubenswrapper[4870]: E0216 17:18:11.338537 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 16 17:18:11 crc kubenswrapper[4870]: E0216 17:18:11.338718 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9z74b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-k4rjv_openstack-operators(2a0ef8b9-a1a9-4014-b5dd-b7356c6b411c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:11 crc kubenswrapper[4870]: E0216 17:18:11.339830 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4rjv" podUID="2a0ef8b9-a1a9-4014-b5dd-b7356c6b411c" Feb 16 17:18:11 crc kubenswrapper[4870]: E0216 17:18:11.689212 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4rjv" podUID="2a0ef8b9-a1a9-4014-b5dd-b7356c6b411c" Feb 16 17:18:11 crc 
kubenswrapper[4870]: E0216 17:18:11.689347 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wzn2r" podUID="3e3dfdb0-abc8-458b-811a-752f6bd9430e" Feb 16 17:18:11 crc kubenswrapper[4870]: E0216 17:18:11.832617 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 16 17:18:11 crc kubenswrapper[4870]: E0216 17:18:11.832793 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dqgt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-b9czx_openstack-operators(dd026e8b-7f8a-4c07-9bee-84e0fe1e535f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:18:11 crc kubenswrapper[4870]: E0216 17:18:11.834018 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-b9czx" podUID="dd026e8b-7f8a-4c07-9bee-84e0fe1e535f" Feb 16 17:18:12 crc kubenswrapper[4870]: E0216 17:18:12.694580 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-b9czx" podUID="dd026e8b-7f8a-4c07-9bee-84e0fe1e535f" Feb 16 17:18:14 crc kubenswrapper[4870]: I0216 17:18:14.838265 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7"] Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.729769 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-bfq8r" event={"ID":"256aea85-8852-4a34-98ef-3c9e07b30453","Type":"ContainerStarted","Data":"8cd10abacf7170be5cef7bc6657a614e6ac4e6feac909f532567926a599b3dee"} Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.730781 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-bfq8r" Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.732250 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wr4hc" event={"ID":"21ffa52a-74b8-444f-a1b6-95dfb4096974","Type":"ContainerStarted","Data":"530c5f6621996dc958fe12550b6f54c661f3b987dc6b7e921756d71732445507"} Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.732727 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wr4hc" Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.734915 4870 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-lvqfx" event={"ID":"e7030097-1d81-471d-8731-13a271f38050","Type":"ContainerStarted","Data":"ea19f79dfc100524b585f5e869e248ed8a745323ced0534c49298d79c2f230fb"} Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.735335 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-lvqfx" Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.736574 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" event={"ID":"44780e56-bccf-4440-b6c6-0333808b2e02","Type":"ContainerStarted","Data":"56227b57812766a50abb926f10c2e1d9042936df7d4c9c347729da4d365db9bd"} Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.736599 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" event={"ID":"44780e56-bccf-4440-b6c6-0333808b2e02","Type":"ContainerStarted","Data":"bf9547c1bcc902f30b96f4bb5122f2ab3164d3b132d6277c9ebf3cb784bab65c"} Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.736974 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.738088 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d6pgp" event={"ID":"5d2f7cba-dd0c-43f0-9b4b-a2097c4a457a","Type":"ContainerStarted","Data":"481577245edc78ea277f37751424dd106462f1d5eae5525975a1043c4ec44574"} Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.738469 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d6pgp" Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 
17:18:15.739595 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-k468x" event={"ID":"65381e3b-70d2-4dbf-a1e5-279696c5cc09","Type":"ContainerStarted","Data":"ebf80ad6a205c1383a3fc413eeab6151c0bca89e1747952b40678b9da5e8447b"} Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.740036 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-k468x" Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.741128 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-58msl" event={"ID":"7a88f711-61b7-44b5-a82e-5c909efc50e9","Type":"ContainerStarted","Data":"e8f46015a15acf17aeed90e39c8c62fb84fa5ca8df3e2d139e053043b2df58f0"} Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.741518 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-58msl" Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.742810 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gcgvr" event={"ID":"6d98461a-872e-4857-a100-f905a5231b83","Type":"ContainerStarted","Data":"9055ee26427782b155afc000ac1a51d3f53d15d5882015db33ce7172463bd883"} Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.743253 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gcgvr" Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.744334 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-hjhdz" event={"ID":"3abc2c2d-aaa8-42a3-876f-1107127dab28","Type":"ContainerStarted","Data":"73253059f3cee64d8b1709f960e1e5ff7ef2ebd302482cf54215dbdba173d920"} Feb 16 17:18:15 crc 
kubenswrapper[4870]: I0216 17:18:15.744692 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-hjhdz" Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.753751 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-bfq8r" podStartSLOduration=3.061619533 podStartE2EDuration="24.753735338s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:53.434576621 +0000 UTC m=+1077.918041005" lastFinishedPulling="2026-02-16 17:18:15.126692426 +0000 UTC m=+1099.610156810" observedRunningTime="2026-02-16 17:18:15.751835094 +0000 UTC m=+1100.235299478" watchObservedRunningTime="2026-02-16 17:18:15.753735338 +0000 UTC m=+1100.237199722" Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.783600 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gcgvr" podStartSLOduration=6.010659582 podStartE2EDuration="24.783577697s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:53.040332989 +0000 UTC m=+1077.523797373" lastFinishedPulling="2026-02-16 17:18:11.813251104 +0000 UTC m=+1096.296715488" observedRunningTime="2026-02-16 17:18:15.776478167 +0000 UTC m=+1100.259942561" watchObservedRunningTime="2026-02-16 17:18:15.783577697 +0000 UTC m=+1100.267042081" Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.885205 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d6pgp" podStartSLOduration=3.155736051 podStartE2EDuration="24.885184666s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:53.399081773 +0000 UTC m=+1077.882546157" lastFinishedPulling="2026-02-16 17:18:15.128530388 +0000 UTC 
m=+1099.611994772" observedRunningTime="2026-02-16 17:18:15.880307229 +0000 UTC m=+1100.363771613" watchObservedRunningTime="2026-02-16 17:18:15.885184666 +0000 UTC m=+1100.368649050" Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.886564 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" podStartSLOduration=24.886556424 podStartE2EDuration="24.886556424s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:18:15.853736611 +0000 UTC m=+1100.337200995" watchObservedRunningTime="2026-02-16 17:18:15.886556424 +0000 UTC m=+1100.370020808" Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.916388 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-58msl" podStartSLOduration=5.556377291 podStartE2EDuration="24.916366623s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:53.023119965 +0000 UTC m=+1077.506584349" lastFinishedPulling="2026-02-16 17:18:12.383109287 +0000 UTC m=+1096.866573681" observedRunningTime="2026-02-16 17:18:15.906192047 +0000 UTC m=+1100.389656451" watchObservedRunningTime="2026-02-16 17:18:15.916366623 +0000 UTC m=+1100.399831007" Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.945159 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-hjhdz" podStartSLOduration=3.289055582 podStartE2EDuration="24.945113452s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:53.472501438 +0000 UTC m=+1077.955965822" lastFinishedPulling="2026-02-16 17:18:15.128559308 +0000 UTC m=+1099.612023692" observedRunningTime="2026-02-16 
17:18:15.940839302 +0000 UTC m=+1100.424303696" watchObservedRunningTime="2026-02-16 17:18:15.945113452 +0000 UTC m=+1100.428577836" Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.972937 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-lvqfx" podStartSLOduration=3.3472410679999998 podStartE2EDuration="24.972917384s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:53.5308633 +0000 UTC m=+1078.014327684" lastFinishedPulling="2026-02-16 17:18:15.156539616 +0000 UTC m=+1099.640004000" observedRunningTime="2026-02-16 17:18:15.966704449 +0000 UTC m=+1100.450168823" watchObservedRunningTime="2026-02-16 17:18:15.972917384 +0000 UTC m=+1100.456381768" Feb 16 17:18:15 crc kubenswrapper[4870]: I0216 17:18:15.986785 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-k468x" podStartSLOduration=4.9254978529999995 podStartE2EDuration="24.986760294s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:52.876571383 +0000 UTC m=+1077.360035777" lastFinishedPulling="2026-02-16 17:18:12.937833834 +0000 UTC m=+1097.421298218" observedRunningTime="2026-02-16 17:18:15.978328736 +0000 UTC m=+1100.461793120" watchObservedRunningTime="2026-02-16 17:18:15.986760294 +0000 UTC m=+1100.470224688" Feb 16 17:18:16 crc kubenswrapper[4870]: I0216 17:18:16.011432 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wr4hc" podStartSLOduration=3.359730519 podStartE2EDuration="25.011416217s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:53.430568398 +0000 UTC m=+1077.914032782" lastFinishedPulling="2026-02-16 17:18:15.082254096 +0000 UTC m=+1099.565718480" observedRunningTime="2026-02-16 
17:18:16.010495071 +0000 UTC m=+1100.493959465" watchObservedRunningTime="2026-02-16 17:18:16.011416217 +0000 UTC m=+1100.494880601" Feb 16 17:18:19 crc kubenswrapper[4870]: I0216 17:18:19.791276 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-68kws" event={"ID":"a104184d-e08b-46ef-8595-6b21f2826f9a","Type":"ContainerStarted","Data":"eebe428606ac62227045f8815c5d50a6dd850c2048298dd42311c2212d187baa"} Feb 16 17:18:19 crc kubenswrapper[4870]: I0216 17:18:19.792335 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-68kws" Feb 16 17:18:19 crc kubenswrapper[4870]: I0216 17:18:19.806208 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-68kws" podStartSLOduration=2.134708255 podStartE2EDuration="28.806175171s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:52.953110275 +0000 UTC m=+1077.436574659" lastFinishedPulling="2026-02-16 17:18:19.624577191 +0000 UTC m=+1104.108041575" observedRunningTime="2026-02-16 17:18:19.803807544 +0000 UTC m=+1104.287271928" watchObservedRunningTime="2026-02-16 17:18:19.806175171 +0000 UTC m=+1104.289639555" Feb 16 17:18:21 crc kubenswrapper[4870]: I0216 17:18:21.675788 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-k468x" Feb 16 17:18:21 crc kubenswrapper[4870]: I0216 17:18:21.804280 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ns7zz" event={"ID":"2c0e615e-3bf7-4627-b800-af60affed5f5","Type":"ContainerStarted","Data":"0568d135d58ad095967b8d769c01e1f6f8028b6bce209eeba73209c2b2fd261e"} Feb 16 17:18:21 crc kubenswrapper[4870]: I0216 17:18:21.804464 4870 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ns7zz" Feb 16 17:18:21 crc kubenswrapper[4870]: I0216 17:18:21.819352 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ns7zz" podStartSLOduration=3.24458894 podStartE2EDuration="30.819335749s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:53.040672349 +0000 UTC m=+1077.524136733" lastFinishedPulling="2026-02-16 17:18:20.615419158 +0000 UTC m=+1105.098883542" observedRunningTime="2026-02-16 17:18:21.818617939 +0000 UTC m=+1106.302082323" watchObservedRunningTime="2026-02-16 17:18:21.819335749 +0000 UTC m=+1106.302800133" Feb 16 17:18:21 crc kubenswrapper[4870]: I0216 17:18:21.864509 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gcgvr" Feb 16 17:18:21 crc kubenswrapper[4870]: I0216 17:18:21.986039 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-58msl" Feb 16 17:18:22 crc kubenswrapper[4870]: I0216 17:18:22.204573 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-bfq8r" Feb 16 17:18:22 crc kubenswrapper[4870]: I0216 17:18:22.449961 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-d6pgp" Feb 16 17:18:22 crc kubenswrapper[4870]: I0216 17:18:22.633334 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-wr4hc" Feb 16 17:18:22 crc kubenswrapper[4870]: I0216 17:18:22.755080 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-hjhdz" Feb 16 17:18:22 crc kubenswrapper[4870]: I0216 17:18:22.761701 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-lvqfx" Feb 16 17:18:22 crc kubenswrapper[4870]: I0216 17:18:22.821936 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cnmml" event={"ID":"98cc8d06-5e5f-406b-9212-0053c9c66238","Type":"ContainerStarted","Data":"fff04c5895914aca94524add6afd92bf65725717bf5e2b26a75819b07d146f0b"} Feb 16 17:18:22 crc kubenswrapper[4870]: I0216 17:18:22.822479 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cnmml" Feb 16 17:18:22 crc kubenswrapper[4870]: I0216 17:18:22.849858 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cnmml" podStartSLOduration=3.516071319 podStartE2EDuration="31.849836782s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:53.380903461 +0000 UTC m=+1077.864367855" lastFinishedPulling="2026-02-16 17:18:21.714668934 +0000 UTC m=+1106.198133318" observedRunningTime="2026-02-16 17:18:22.843517554 +0000 UTC m=+1107.326981938" watchObservedRunningTime="2026-02-16 17:18:22.849836782 +0000 UTC m=+1107.333301166" Feb 16 17:18:23 crc kubenswrapper[4870]: I0216 17:18:23.409455 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert\") pod \"infra-operator-controller-manager-79d975b745-4cm6m\" (UID: \"5d8420fa-9cbd-47f7-a252-a187de8515cd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" Feb 16 17:18:23 crc kubenswrapper[4870]: I0216 17:18:23.418093 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d8420fa-9cbd-47f7-a252-a187de8515cd-cert\") pod \"infra-operator-controller-manager-79d975b745-4cm6m\" (UID: \"5d8420fa-9cbd-47f7-a252-a187de8515cd\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" Feb 16 17:18:23 crc kubenswrapper[4870]: I0216 17:18:23.505467 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-wsdch" Feb 16 17:18:23 crc kubenswrapper[4870]: I0216 17:18:23.515075 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" Feb 16 17:18:23 crc kubenswrapper[4870]: I0216 17:18:23.831620 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v7gn2" event={"ID":"0d52da13-82bf-439f-ac03-bbf3f539de78","Type":"ContainerStarted","Data":"c558fb1c2ec5b7603d46b120a964d0fc58a9c3232595525606869f32469a2886"} Feb 16 17:18:23 crc kubenswrapper[4870]: I0216 17:18:23.831910 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v7gn2" Feb 16 17:18:23 crc kubenswrapper[4870]: I0216 17:18:23.856323 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v7gn2" podStartSLOduration=3.127090384 podStartE2EDuration="32.856296508s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:52.97852864 +0000 UTC m=+1077.461993024" lastFinishedPulling="2026-02-16 17:18:22.707734764 +0000 UTC m=+1107.191199148" observedRunningTime="2026-02-16 17:18:23.846904934 +0000 UTC m=+1108.330369338" watchObservedRunningTime="2026-02-16 17:18:23.856296508 +0000 UTC m=+1108.339760932" Feb 16 
17:18:24 crc kubenswrapper[4870]: I0216 17:18:24.127835 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5\" (UID: \"5f7c2918-26fd-46fb-bae6-52fbdd3eded7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" Feb 16 17:18:24 crc kubenswrapper[4870]: I0216 17:18:24.135100 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f7c2918-26fd-46fb-bae6-52fbdd3eded7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5\" (UID: \"5f7c2918-26fd-46fb-bae6-52fbdd3eded7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" Feb 16 17:18:24 crc kubenswrapper[4870]: I0216 17:18:24.230911 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-lqndj" Feb 16 17:18:24 crc kubenswrapper[4870]: I0216 17:18:24.237107 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" Feb 16 17:18:24 crc kubenswrapper[4870]: I0216 17:18:24.854197 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m"] Feb 16 17:18:24 crc kubenswrapper[4870]: I0216 17:18:24.859236 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-nx2h7" event={"ID":"a8be2317-5b27-4e2b-b403-66d75647fda1","Type":"ContainerStarted","Data":"3f18fd4d269352c43217be97483c757e89aa4f10303440d526ecef4668dc7c1d"} Feb 16 17:18:24 crc kubenswrapper[4870]: I0216 17:18:24.859392 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-nx2h7" Feb 16 17:18:24 crc kubenswrapper[4870]: I0216 17:18:24.862158 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wzn2r" event={"ID":"3e3dfdb0-abc8-458b-811a-752f6bd9430e","Type":"ContainerStarted","Data":"9c7792d738910ab657b91ef9a51cc91e1494fe2d1f1d66135554c8489938b137"} Feb 16 17:18:24 crc kubenswrapper[4870]: I0216 17:18:24.862502 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wzn2r" Feb 16 17:18:24 crc kubenswrapper[4870]: I0216 17:18:24.878355 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-nx2h7" podStartSLOduration=3.038288876 podStartE2EDuration="33.878341303s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:53.376858107 +0000 UTC m=+1077.860322491" lastFinishedPulling="2026-02-16 17:18:24.216910524 +0000 UTC m=+1108.700374918" observedRunningTime="2026-02-16 17:18:24.877290424 +0000 UTC m=+1109.360754808" 
watchObservedRunningTime="2026-02-16 17:18:24.878341303 +0000 UTC m=+1109.361805677" Feb 16 17:18:24 crc kubenswrapper[4870]: I0216 17:18:24.928706 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wzn2r" podStartSLOduration=2.701841271 podStartE2EDuration="33.92868547s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:53.382064034 +0000 UTC m=+1077.865528418" lastFinishedPulling="2026-02-16 17:18:24.608908233 +0000 UTC m=+1109.092372617" observedRunningTime="2026-02-16 17:18:24.918426221 +0000 UTC m=+1109.401890605" watchObservedRunningTime="2026-02-16 17:18:24.92868547 +0000 UTC m=+1109.412149854" Feb 16 17:18:24 crc kubenswrapper[4870]: I0216 17:18:24.940261 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5"] Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.870915 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" event={"ID":"5f7c2918-26fd-46fb-bae6-52fbdd3eded7","Type":"ContainerStarted","Data":"46df3141c8ca54ff0fe626d9f9980b9b280d53e2020d8d3c7c21e2d135a37434"} Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.873383 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-t9hz5" event={"ID":"aef27f45-8abf-44f3-a290-c6d49dbfa1fd","Type":"ContainerStarted","Data":"47efdab4b6a11ae2c63680d342c710490cd923f8864cf7702e6469e665405f41"} Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.873593 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-t9hz5" Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.874856 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cchkt" event={"ID":"e7d3d5ca-7088-46e0-88eb-bc8f1270b85d","Type":"ContainerStarted","Data":"d57bad29552fff86de56cf24e0b102565c44b6c75cd3cbdc823e86884cd5728b"} Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.875115 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cchkt" Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.876596 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" event={"ID":"5d8420fa-9cbd-47f7-a252-a187de8515cd","Type":"ContainerStarted","Data":"358a79faabdff2a5e144c4d7f27925927ba1c5bb8668a62feda6e5186dddbf53"} Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.878065 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-b9czx" event={"ID":"dd026e8b-7f8a-4c07-9bee-84e0fe1e535f","Type":"ContainerStarted","Data":"3ab6cd7b0fbb5128aa7a0f2ddc30b928ca4f50d8836ec7fae47eff042fc894f9"} Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.878502 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-b9czx" Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.882821 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4rjv" event={"ID":"2a0ef8b9-a1a9-4014-b5dd-b7356c6b411c","Type":"ContainerStarted","Data":"046c0eaa29f242daeb5df37eb74c2d7e4789172093cf92c188dbebf4720c8758"} Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.884136 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8cf9q" 
event={"ID":"feb3d8e0-ace7-4aa3-9621-e56d57e7b510","Type":"ContainerStarted","Data":"3ac4dfc0b09347654fe9041d417690ebec4c103b6396fbb3ea16332d7ffea7fb"} Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.884359 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8cf9q" Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.885980 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hqjtw" event={"ID":"bb375b58-f7fa-4006-b087-cb06ea0cfc86","Type":"ContainerStarted","Data":"6d90c4680daa2a5812fe60c9a42eff2a25637b188fdd4cb8f553ff5d8a632444"} Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.886754 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hqjtw" Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.893493 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-t9hz5" podStartSLOduration=3.153624441 podStartE2EDuration="34.893477953s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:53.023431574 +0000 UTC m=+1077.506895948" lastFinishedPulling="2026-02-16 17:18:24.763285066 +0000 UTC m=+1109.246749460" observedRunningTime="2026-02-16 17:18:25.892389492 +0000 UTC m=+1110.375853886" watchObservedRunningTime="2026-02-16 17:18:25.893477953 +0000 UTC m=+1110.376942337" Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.913922 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hqjtw" podStartSLOduration=3.193156212 podStartE2EDuration="34.913905647s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:53.04250517 +0000 UTC m=+1077.525969554" 
lastFinishedPulling="2026-02-16 17:18:24.763254605 +0000 UTC m=+1109.246718989" observedRunningTime="2026-02-16 17:18:25.910701797 +0000 UTC m=+1110.394166191" watchObservedRunningTime="2026-02-16 17:18:25.913905647 +0000 UTC m=+1110.397370031" Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.938263 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-b9czx" podStartSLOduration=2.313970857 podStartE2EDuration="34.938245732s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:53.068131291 +0000 UTC m=+1077.551595675" lastFinishedPulling="2026-02-16 17:18:25.692406166 +0000 UTC m=+1110.175870550" observedRunningTime="2026-02-16 17:18:25.933744986 +0000 UTC m=+1110.417209370" watchObservedRunningTime="2026-02-16 17:18:25.938245732 +0000 UTC m=+1110.421710116" Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.954701 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8cf9q" podStartSLOduration=3.605652418 podStartE2EDuration="34.954681255s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:53.517568706 +0000 UTC m=+1078.001033090" lastFinishedPulling="2026-02-16 17:18:24.866597543 +0000 UTC m=+1109.350061927" observedRunningTime="2026-02-16 17:18:25.946342 +0000 UTC m=+1110.429806374" watchObservedRunningTime="2026-02-16 17:18:25.954681255 +0000 UTC m=+1110.438145639" Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.969032 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cchkt" podStartSLOduration=2.521651701 podStartE2EDuration="34.969017528s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:17:52.564780001 +0000 UTC m=+1077.048244385" 
lastFinishedPulling="2026-02-16 17:18:25.012145828 +0000 UTC m=+1109.495610212" observedRunningTime="2026-02-16 17:18:25.966447656 +0000 UTC m=+1110.449912040" watchObservedRunningTime="2026-02-16 17:18:25.969017528 +0000 UTC m=+1110.452481912" Feb 16 17:18:25 crc kubenswrapper[4870]: I0216 17:18:25.988890 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k4rjv" podStartSLOduration=2.151302357 podStartE2EDuration="33.988875797s" podCreationTimestamp="2026-02-16 17:17:52 +0000 UTC" firstStartedPulling="2026-02-16 17:17:53.762022154 +0000 UTC m=+1078.245486538" lastFinishedPulling="2026-02-16 17:18:25.599595594 +0000 UTC m=+1110.083059978" observedRunningTime="2026-02-16 17:18:25.983913917 +0000 UTC m=+1110.467378301" watchObservedRunningTime="2026-02-16 17:18:25.988875797 +0000 UTC m=+1110.472340181" Feb 16 17:18:28 crc kubenswrapper[4870]: I0216 17:18:28.523641 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6f58b764dd-f84j7" Feb 16 17:18:28 crc kubenswrapper[4870]: I0216 17:18:28.932053 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" event={"ID":"5d8420fa-9cbd-47f7-a252-a187de8515cd","Type":"ContainerStarted","Data":"ce14ec0f671afe227478865921aa6f74c3b9105c1b409dd07c16d415de9dedd7"} Feb 16 17:18:28 crc kubenswrapper[4870]: I0216 17:18:28.932453 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" Feb 16 17:18:28 crc kubenswrapper[4870]: I0216 17:18:28.934795 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" 
event={"ID":"5f7c2918-26fd-46fb-bae6-52fbdd3eded7","Type":"ContainerStarted","Data":"fbde359dd1f4602c7cf122c505a4afa167b267534c634b07d50aa030c74469e3"} Feb 16 17:18:28 crc kubenswrapper[4870]: I0216 17:18:28.935520 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" Feb 16 17:18:28 crc kubenswrapper[4870]: I0216 17:18:28.973759 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" podStartSLOduration=34.740949402 podStartE2EDuration="37.973740364s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:18:24.877185661 +0000 UTC m=+1109.360650045" lastFinishedPulling="2026-02-16 17:18:28.109976623 +0000 UTC m=+1112.593441007" observedRunningTime="2026-02-16 17:18:28.968337042 +0000 UTC m=+1113.451801446" watchObservedRunningTime="2026-02-16 17:18:28.973740364 +0000 UTC m=+1113.457204748" Feb 16 17:18:29 crc kubenswrapper[4870]: I0216 17:18:29.009560 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" podStartSLOduration=34.913396523 podStartE2EDuration="38.009537471s" podCreationTimestamp="2026-02-16 17:17:51 +0000 UTC" firstStartedPulling="2026-02-16 17:18:25.007119856 +0000 UTC m=+1109.490584240" lastFinishedPulling="2026-02-16 17:18:28.103260804 +0000 UTC m=+1112.586725188" observedRunningTime="2026-02-16 17:18:29.001563327 +0000 UTC m=+1113.485027711" watchObservedRunningTime="2026-02-16 17:18:29.009537471 +0000 UTC m=+1113.493001855" Feb 16 17:18:31 crc kubenswrapper[4870]: I0216 17:18:31.473689 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cchkt" Feb 16 17:18:31 crc kubenswrapper[4870]: I0216 17:18:31.496800 4870 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-68kws" Feb 16 17:18:31 crc kubenswrapper[4870]: I0216 17:18:31.525196 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-t9hz5" Feb 16 17:18:31 crc kubenswrapper[4870]: I0216 17:18:31.543866 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-v7gn2" Feb 16 17:18:31 crc kubenswrapper[4870]: I0216 17:18:31.615109 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hqjtw" Feb 16 17:18:31 crc kubenswrapper[4870]: I0216 17:18:31.757337 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ns7zz" Feb 16 17:18:31 crc kubenswrapper[4870]: I0216 17:18:31.915426 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-b9czx" Feb 16 17:18:32 crc kubenswrapper[4870]: I0216 17:18:32.104917 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-nx2h7" Feb 16 17:18:32 crc kubenswrapper[4870]: I0216 17:18:32.249007 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wzn2r" Feb 16 17:18:32 crc kubenswrapper[4870]: I0216 17:18:32.422592 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cnmml" Feb 16 17:18:32 crc kubenswrapper[4870]: I0216 17:18:32.893998 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8cf9q" Feb 16 17:18:33 crc kubenswrapper[4870]: I0216 17:18:33.521732 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4cm6m" Feb 16 17:18:34 crc kubenswrapper[4870]: I0216 17:18:34.247661 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5" Feb 16 17:18:35 crc kubenswrapper[4870]: I0216 17:18:35.367247 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:18:35 crc kubenswrapper[4870]: I0216 17:18:35.367343 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:18:35 crc kubenswrapper[4870]: I0216 17:18:35.367411 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:18:35 crc kubenswrapper[4870]: I0216 17:18:35.368405 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c6cb73ad3168219aed3caa65ecbcfeaf20afa41eba328438ce91697a527d897b"} pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:18:35 crc kubenswrapper[4870]: I0216 17:18:35.368512 4870 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" containerID="cri-o://c6cb73ad3168219aed3caa65ecbcfeaf20afa41eba328438ce91697a527d897b" gracePeriod=600 Feb 16 17:18:36 crc kubenswrapper[4870]: I0216 17:18:36.000610 4870 generic.go:334] "Generic (PLEG): container finished" podID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerID="c6cb73ad3168219aed3caa65ecbcfeaf20afa41eba328438ce91697a527d897b" exitCode=0 Feb 16 17:18:36 crc kubenswrapper[4870]: I0216 17:18:36.000656 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerDied","Data":"c6cb73ad3168219aed3caa65ecbcfeaf20afa41eba328438ce91697a527d897b"} Feb 16 17:18:36 crc kubenswrapper[4870]: I0216 17:18:36.000961 4870 scope.go:117] "RemoveContainer" containerID="ae9b5f8dd0e4675f99af74251a96ffd60d2f653f4d32feb06324bf4aaba5fef5" Feb 16 17:18:37 crc kubenswrapper[4870]: I0216 17:18:37.022346 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerStarted","Data":"a26cade4c570777b8e6874ae4e148783c7ff0c66ca799ca6a024730b89056882"} Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.745580 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bptzj"] Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.757708 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bptzj" Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.761382 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.761440 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-nlt9f" Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.761576 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.761675 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.763808 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bptzj"] Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.823082 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m46vp"] Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.824132 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-m46vp" Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.827034 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.839553 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m46vp"] Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.887845 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c84kn\" (UniqueName: \"kubernetes.io/projected/d8ed11f9-9a98-4e02-923f-91dc562a8886-kube-api-access-c84kn\") pod \"dnsmasq-dns-675f4bcbfc-bptzj\" (UID: \"d8ed11f9-9a98-4e02-923f-91dc562a8886\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bptzj" Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.887911 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ed11f9-9a98-4e02-923f-91dc562a8886-config\") pod \"dnsmasq-dns-675f4bcbfc-bptzj\" (UID: \"d8ed11f9-9a98-4e02-923f-91dc562a8886\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bptzj" Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.989218 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ed11f9-9a98-4e02-923f-91dc562a8886-config\") pod \"dnsmasq-dns-675f4bcbfc-bptzj\" (UID: \"d8ed11f9-9a98-4e02-923f-91dc562a8886\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bptzj" Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.989275 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-config\") pod \"dnsmasq-dns-78dd6ddcc-m46vp\" (UID: \"3d35ff66-a700-46ba-9f68-728f2c0c1aa9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m46vp" Feb 16 17:18:52 crc 
kubenswrapper[4870]: I0216 17:18:52.989306 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h8cs\" (UniqueName: \"kubernetes.io/projected/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-kube-api-access-9h8cs\") pod \"dnsmasq-dns-78dd6ddcc-m46vp\" (UID: \"3d35ff66-a700-46ba-9f68-728f2c0c1aa9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m46vp" Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.989369 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-m46vp\" (UID: \"3d35ff66-a700-46ba-9f68-728f2c0c1aa9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m46vp" Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.989393 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c84kn\" (UniqueName: \"kubernetes.io/projected/d8ed11f9-9a98-4e02-923f-91dc562a8886-kube-api-access-c84kn\") pod \"dnsmasq-dns-675f4bcbfc-bptzj\" (UID: \"d8ed11f9-9a98-4e02-923f-91dc562a8886\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bptzj" Feb 16 17:18:52 crc kubenswrapper[4870]: I0216 17:18:52.990297 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ed11f9-9a98-4e02-923f-91dc562a8886-config\") pod \"dnsmasq-dns-675f4bcbfc-bptzj\" (UID: \"d8ed11f9-9a98-4e02-923f-91dc562a8886\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bptzj" Feb 16 17:18:53 crc kubenswrapper[4870]: I0216 17:18:53.011575 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c84kn\" (UniqueName: \"kubernetes.io/projected/d8ed11f9-9a98-4e02-923f-91dc562a8886-kube-api-access-c84kn\") pod \"dnsmasq-dns-675f4bcbfc-bptzj\" (UID: \"d8ed11f9-9a98-4e02-923f-91dc562a8886\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bptzj" Feb 16 17:18:53 crc kubenswrapper[4870]: 
I0216 17:18:53.075604 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bptzj" Feb 16 17:18:53 crc kubenswrapper[4870]: I0216 17:18:53.091016 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-config\") pod \"dnsmasq-dns-78dd6ddcc-m46vp\" (UID: \"3d35ff66-a700-46ba-9f68-728f2c0c1aa9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m46vp" Feb 16 17:18:53 crc kubenswrapper[4870]: I0216 17:18:53.091087 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h8cs\" (UniqueName: \"kubernetes.io/projected/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-kube-api-access-9h8cs\") pod \"dnsmasq-dns-78dd6ddcc-m46vp\" (UID: \"3d35ff66-a700-46ba-9f68-728f2c0c1aa9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m46vp" Feb 16 17:18:53 crc kubenswrapper[4870]: I0216 17:18:53.091162 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-m46vp\" (UID: \"3d35ff66-a700-46ba-9f68-728f2c0c1aa9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m46vp" Feb 16 17:18:53 crc kubenswrapper[4870]: I0216 17:18:53.092188 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-m46vp\" (UID: \"3d35ff66-a700-46ba-9f68-728f2c0c1aa9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m46vp" Feb 16 17:18:53 crc kubenswrapper[4870]: I0216 17:18:53.092553 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-config\") pod \"dnsmasq-dns-78dd6ddcc-m46vp\" (UID: \"3d35ff66-a700-46ba-9f68-728f2c0c1aa9\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-m46vp" Feb 16 17:18:53 crc kubenswrapper[4870]: I0216 17:18:53.110037 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h8cs\" (UniqueName: \"kubernetes.io/projected/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-kube-api-access-9h8cs\") pod \"dnsmasq-dns-78dd6ddcc-m46vp\" (UID: \"3d35ff66-a700-46ba-9f68-728f2c0c1aa9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m46vp" Feb 16 17:18:53 crc kubenswrapper[4870]: I0216 17:18:53.140202 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-m46vp" Feb 16 17:18:53 crc kubenswrapper[4870]: I0216 17:18:53.516922 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bptzj"] Feb 16 17:18:53 crc kubenswrapper[4870]: I0216 17:18:53.642403 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m46vp"] Feb 16 17:18:54 crc kubenswrapper[4870]: I0216 17:18:54.155239 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-m46vp" event={"ID":"3d35ff66-a700-46ba-9f68-728f2c0c1aa9","Type":"ContainerStarted","Data":"7be663a1f9c83c0bbd150f726365800c6c13b36c78fde78f4f8e2e39aef3599e"} Feb 16 17:18:54 crc kubenswrapper[4870]: I0216 17:18:54.159139 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bptzj" event={"ID":"d8ed11f9-9a98-4e02-923f-91dc562a8886","Type":"ContainerStarted","Data":"c7062692afdf35c620e41898ece245585b4585fa1ae732de8b9b7afbb45f64fc"} Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.523014 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bptzj"] Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.570900 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zd6zl"] Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.572449 4870 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.581938 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zd6zl"] Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.732017 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtrlj\" (UniqueName: \"kubernetes.io/projected/4882424e-0156-4f5d-b6fb-9f7a54d52ded-kube-api-access-qtrlj\") pod \"dnsmasq-dns-666b6646f7-zd6zl\" (UID: \"4882424e-0156-4f5d-b6fb-9f7a54d52ded\") " pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.732104 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4882424e-0156-4f5d-b6fb-9f7a54d52ded-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zd6zl\" (UID: \"4882424e-0156-4f5d-b6fb-9f7a54d52ded\") " pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.732137 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4882424e-0156-4f5d-b6fb-9f7a54d52ded-config\") pod \"dnsmasq-dns-666b6646f7-zd6zl\" (UID: \"4882424e-0156-4f5d-b6fb-9f7a54d52ded\") " pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.831598 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m46vp"] Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.833162 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtrlj\" (UniqueName: \"kubernetes.io/projected/4882424e-0156-4f5d-b6fb-9f7a54d52ded-kube-api-access-qtrlj\") pod \"dnsmasq-dns-666b6646f7-zd6zl\" (UID: \"4882424e-0156-4f5d-b6fb-9f7a54d52ded\") " 
pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.833235 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4882424e-0156-4f5d-b6fb-9f7a54d52ded-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zd6zl\" (UID: \"4882424e-0156-4f5d-b6fb-9f7a54d52ded\") " pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.833274 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4882424e-0156-4f5d-b6fb-9f7a54d52ded-config\") pod \"dnsmasq-dns-666b6646f7-zd6zl\" (UID: \"4882424e-0156-4f5d-b6fb-9f7a54d52ded\") " pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.834156 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4882424e-0156-4f5d-b6fb-9f7a54d52ded-config\") pod \"dnsmasq-dns-666b6646f7-zd6zl\" (UID: \"4882424e-0156-4f5d-b6fb-9f7a54d52ded\") " pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.835011 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4882424e-0156-4f5d-b6fb-9f7a54d52ded-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zd6zl\" (UID: \"4882424e-0156-4f5d-b6fb-9f7a54d52ded\") " pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.860750 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtrlj\" (UniqueName: \"kubernetes.io/projected/4882424e-0156-4f5d-b6fb-9f7a54d52ded-kube-api-access-qtrlj\") pod \"dnsmasq-dns-666b6646f7-zd6zl\" (UID: \"4882424e-0156-4f5d-b6fb-9f7a54d52ded\") " pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.864395 4870 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5qck9"] Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.865601 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.875544 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5qck9"] Feb 16 17:18:55 crc kubenswrapper[4870]: I0216 17:18:55.895688 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.041822 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/743bd071-f9bd-4948-b99b-cd3e29bfe49e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5qck9\" (UID: \"743bd071-f9bd-4948-b99b-cd3e29bfe49e\") " pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.041995 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pg9x\" (UniqueName: \"kubernetes.io/projected/743bd071-f9bd-4948-b99b-cd3e29bfe49e-kube-api-access-7pg9x\") pod \"dnsmasq-dns-57d769cc4f-5qck9\" (UID: \"743bd071-f9bd-4948-b99b-cd3e29bfe49e\") " pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.042089 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743bd071-f9bd-4948-b99b-cd3e29bfe49e-config\") pod \"dnsmasq-dns-57d769cc4f-5qck9\" (UID: \"743bd071-f9bd-4948-b99b-cd3e29bfe49e\") " pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.142987 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/743bd071-f9bd-4948-b99b-cd3e29bfe49e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5qck9\" (UID: \"743bd071-f9bd-4948-b99b-cd3e29bfe49e\") " pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.143027 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pg9x\" (UniqueName: \"kubernetes.io/projected/743bd071-f9bd-4948-b99b-cd3e29bfe49e-kube-api-access-7pg9x\") pod \"dnsmasq-dns-57d769cc4f-5qck9\" (UID: \"743bd071-f9bd-4948-b99b-cd3e29bfe49e\") " pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.143119 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743bd071-f9bd-4948-b99b-cd3e29bfe49e-config\") pod \"dnsmasq-dns-57d769cc4f-5qck9\" (UID: \"743bd071-f9bd-4948-b99b-cd3e29bfe49e\") " pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.143977 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743bd071-f9bd-4948-b99b-cd3e29bfe49e-config\") pod \"dnsmasq-dns-57d769cc4f-5qck9\" (UID: \"743bd071-f9bd-4948-b99b-cd3e29bfe49e\") " pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.144032 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/743bd071-f9bd-4948-b99b-cd3e29bfe49e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5qck9\" (UID: \"743bd071-f9bd-4948-b99b-cd3e29bfe49e\") " pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.172175 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pg9x\" (UniqueName: \"kubernetes.io/projected/743bd071-f9bd-4948-b99b-cd3e29bfe49e-kube-api-access-7pg9x\") pod 
\"dnsmasq-dns-57d769cc4f-5qck9\" (UID: \"743bd071-f9bd-4948-b99b-cd3e29bfe49e\") " pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.208174 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.395642 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zd6zl"] Feb 16 17:18:56 crc kubenswrapper[4870]: W0216 17:18:56.409768 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4882424e_0156_4f5d_b6fb_9f7a54d52ded.slice/crio-ae2a745b10d9e2becc43684d5a320746122100c2d18f54ce7112c30b893ea8f5 WatchSource:0}: Error finding container ae2a745b10d9e2becc43684d5a320746122100c2d18f54ce7112c30b893ea8f5: Status 404 returned error can't find the container with id ae2a745b10d9e2becc43684d5a320746122100c2d18f54ce7112c30b893ea8f5 Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.671040 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5qck9"] Feb 16 17:18:56 crc kubenswrapper[4870]: W0216 17:18:56.676158 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod743bd071_f9bd_4948_b99b_cd3e29bfe49e.slice/crio-bdb647931bf1a12d87fd4d6371a13368f747a2b522017b14ccdbd1d8ba0b92a7 WatchSource:0}: Error finding container bdb647931bf1a12d87fd4d6371a13368f747a2b522017b14ccdbd1d8ba0b92a7: Status 404 returned error can't find the container with id bdb647931bf1a12d87fd4d6371a13368f747a2b522017b14ccdbd1d8ba0b92a7 Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.729322 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.731088 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.735357 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.735704 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.735888 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.736802 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.737025 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ft57t" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.737196 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.737370 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.739324 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.853312 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.853359 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.853379 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.853451 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b5e1f6b0-0338-456f-a676-97270f46def2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5e1f6b0-0338-456f-a676-97270f46def2\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.853486 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.853507 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.853535 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vl8v\" (UniqueName: 
\"kubernetes.io/projected/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-kube-api-access-7vl8v\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.853569 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-config-data\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.853589 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.853613 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.853649 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.955417 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b5e1f6b0-0338-456f-a676-97270f46def2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5e1f6b0-0338-456f-a676-97270f46def2\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.955481 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.955534 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.955567 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vl8v\" (UniqueName: \"kubernetes.io/projected/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-kube-api-access-7vl8v\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.955591 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-config-data\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.955611 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.955635 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.955677 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.955735 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.955757 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.955781 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.957256 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.957755 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.958043 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.958086 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b5e1f6b0-0338-456f-a676-97270f46def2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5e1f6b0-0338-456f-a676-97270f46def2\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6c4a286b76af7e4f544cab857c7a6aac7772706a00e4a71b578cfd65f9036ba0/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.958106 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.958754 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.961216 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.961536 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-config-data\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.961538 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.961852 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.963566 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc 
kubenswrapper[4870]: I0216 17:18:56.976163 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vl8v\" (UniqueName: \"kubernetes.io/projected/d027dcfc-cbb1-4c78-b55f-0ed148b1faad-kube-api-access-7vl8v\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:56 crc kubenswrapper[4870]: I0216 17:18:56.986396 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b5e1f6b0-0338-456f-a676-97270f46def2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5e1f6b0-0338-456f-a676-97270f46def2\") pod \"rabbitmq-server-0\" (UID: \"d027dcfc-cbb1-4c78-b55f-0ed148b1faad\") " pod="openstack/rabbitmq-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.049465 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.051273 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.052377 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.053372 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.053572 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.053937 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.054105 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.054232 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.054347 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-sd29m" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.054793 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.056788 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.159577 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/66aba020-76f1-4cf7-992b-0745bd3c3512-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.159635 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/66aba020-76f1-4cf7-992b-0745bd3c3512-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.159662 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/66aba020-76f1-4cf7-992b-0745bd3c3512-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.159791 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4cnw\" (UniqueName: \"kubernetes.io/projected/66aba020-76f1-4cf7-992b-0745bd3c3512-kube-api-access-q4cnw\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.159850 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/66aba020-76f1-4cf7-992b-0745bd3c3512-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.160031 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/66aba020-76f1-4cf7-992b-0745bd3c3512-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.160075 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/66aba020-76f1-4cf7-992b-0745bd3c3512-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.160133 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66aba020-76f1-4cf7-992b-0745bd3c3512-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.160156 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/66aba020-76f1-4cf7-992b-0745bd3c3512-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.160346 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-45547462-bdc8-4486-bdd0-4992cbc0c3d9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45547462-bdc8-4486-bdd0-4992cbc0c3d9\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.160406 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/66aba020-76f1-4cf7-992b-0745bd3c3512-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.205738 4870 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" event={"ID":"4882424e-0156-4f5d-b6fb-9f7a54d52ded","Type":"ContainerStarted","Data":"ae2a745b10d9e2becc43684d5a320746122100c2d18f54ce7112c30b893ea8f5"} Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.208557 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" event={"ID":"743bd071-f9bd-4948-b99b-cd3e29bfe49e","Type":"ContainerStarted","Data":"bdb647931bf1a12d87fd4d6371a13368f747a2b522017b14ccdbd1d8ba0b92a7"} Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.262662 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/66aba020-76f1-4cf7-992b-0745bd3c3512-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.262708 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/66aba020-76f1-4cf7-992b-0745bd3c3512-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.263596 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66aba020-76f1-4cf7-992b-0745bd3c3512-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.263628 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/66aba020-76f1-4cf7-992b-0745bd3c3512-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.263685 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-45547462-bdc8-4486-bdd0-4992cbc0c3d9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45547462-bdc8-4486-bdd0-4992cbc0c3d9\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.263717 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/66aba020-76f1-4cf7-992b-0745bd3c3512-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.263793 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/66aba020-76f1-4cf7-992b-0745bd3c3512-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.263827 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/66aba020-76f1-4cf7-992b-0745bd3c3512-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.263854 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/66aba020-76f1-4cf7-992b-0745bd3c3512-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.263888 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4cnw\" (UniqueName: \"kubernetes.io/projected/66aba020-76f1-4cf7-992b-0745bd3c3512-kube-api-access-q4cnw\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.263915 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/66aba020-76f1-4cf7-992b-0745bd3c3512-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.265644 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/66aba020-76f1-4cf7-992b-0745bd3c3512-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.265715 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/66aba020-76f1-4cf7-992b-0745bd3c3512-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.266063 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66aba020-76f1-4cf7-992b-0745bd3c3512-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 
17:18:57.266078 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/66aba020-76f1-4cf7-992b-0745bd3c3512-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.266298 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/66aba020-76f1-4cf7-992b-0745bd3c3512-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.268393 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.268426 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-45547462-bdc8-4486-bdd0-4992cbc0c3d9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45547462-bdc8-4486-bdd0-4992cbc0c3d9\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d83266d72f3f80c1c6a74e89e5438cae15c08f9977e59cb47dc656e4af2d552c/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.273674 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/66aba020-76f1-4cf7-992b-0745bd3c3512-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.273737 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/66aba020-76f1-4cf7-992b-0745bd3c3512-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.273765 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/66aba020-76f1-4cf7-992b-0745bd3c3512-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.278179 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/66aba020-76f1-4cf7-992b-0745bd3c3512-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.286247 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4cnw\" (UniqueName: \"kubernetes.io/projected/66aba020-76f1-4cf7-992b-0745bd3c3512-kube-api-access-q4cnw\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.315899 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-45547462-bdc8-4486-bdd0-4992cbc0c3d9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-45547462-bdc8-4486-bdd0-4992cbc0c3d9\") pod \"rabbitmq-cell1-server-0\" (UID: \"66aba020-76f1-4cf7-992b-0745bd3c3512\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.317831 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 17:18:57 crc kubenswrapper[4870]: W0216 17:18:57.326140 4870 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd027dcfc_cbb1_4c78_b55f_0ed148b1faad.slice/crio-c4b6140db7f6cd4ae4daff217d9a3ad049e932711d2d1a4bf1b66616921208bb WatchSource:0}: Error finding container c4b6140db7f6cd4ae4daff217d9a3ad049e932711d2d1a4bf1b66616921208bb: Status 404 returned error can't find the container with id c4b6140db7f6cd4ae4daff217d9a3ad049e932711d2d1a4bf1b66616921208bb Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.376503 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:18:57 crc kubenswrapper[4870]: I0216 17:18:57.807299 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 17:18:57 crc kubenswrapper[4870]: W0216 17:18:57.830092 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66aba020_76f1_4cf7_992b_0745bd3c3512.slice/crio-abe960851ee672f791b94f3db22a1f701998aa3c9ab1aa145ae9e82ab73c0a6c WatchSource:0}: Error finding container abe960851ee672f791b94f3db22a1f701998aa3c9ab1aa145ae9e82ab73c0a6c: Status 404 returned error can't find the container with id abe960851ee672f791b94f3db22a1f701998aa3c9ab1aa145ae9e82ab73c0a6c Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.178987 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.180197 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.185766 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.186047 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7nlgn" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.186588 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.187065 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.190198 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.195634 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.246026 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"66aba020-76f1-4cf7-992b-0745bd3c3512","Type":"ContainerStarted","Data":"abe960851ee672f791b94f3db22a1f701998aa3c9ab1aa145ae9e82ab73c0a6c"} Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.246073 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d027dcfc-cbb1-4c78-b55f-0ed148b1faad","Type":"ContainerStarted","Data":"c4b6140db7f6cd4ae4daff217d9a3ad049e932711d2d1a4bf1b66616921208bb"} Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.278274 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6723230-3e6b-43cc-bda7-2aac2faa0e67-operator-scripts\") pod \"openstack-galera-0\" (UID: 
\"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.278421 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a6723230-3e6b-43cc-bda7-2aac2faa0e67-kolla-config\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.278505 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6723230-3e6b-43cc-bda7-2aac2faa0e67-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.278601 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5e40f22a-44a4-472f-a7b7-e58276ae38c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e40f22a-44a4-472f-a7b7-e58276ae38c2\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.278979 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a6723230-3e6b-43cc-bda7-2aac2faa0e67-config-data-default\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.279123 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6723230-3e6b-43cc-bda7-2aac2faa0e67-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.279189 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wq62\" (UniqueName: \"kubernetes.io/projected/a6723230-3e6b-43cc-bda7-2aac2faa0e67-kube-api-access-5wq62\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.279391 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a6723230-3e6b-43cc-bda7-2aac2faa0e67-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.380633 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5e40f22a-44a4-472f-a7b7-e58276ae38c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e40f22a-44a4-472f-a7b7-e58276ae38c2\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.380707 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a6723230-3e6b-43cc-bda7-2aac2faa0e67-config-data-default\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.380762 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6723230-3e6b-43cc-bda7-2aac2faa0e67-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.380823 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wq62\" (UniqueName: \"kubernetes.io/projected/a6723230-3e6b-43cc-bda7-2aac2faa0e67-kube-api-access-5wq62\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.380928 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a6723230-3e6b-43cc-bda7-2aac2faa0e67-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.380986 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6723230-3e6b-43cc-bda7-2aac2faa0e67-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.381029 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a6723230-3e6b-43cc-bda7-2aac2faa0e67-kolla-config\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.381052 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6723230-3e6b-43cc-bda7-2aac2faa0e67-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: 
I0216 17:18:58.382870 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a6723230-3e6b-43cc-bda7-2aac2faa0e67-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.383044 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a6723230-3e6b-43cc-bda7-2aac2faa0e67-kolla-config\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.383312 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a6723230-3e6b-43cc-bda7-2aac2faa0e67-config-data-default\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.385464 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6723230-3e6b-43cc-bda7-2aac2faa0e67-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.385940 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.386019 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5e40f22a-44a4-472f-a7b7-e58276ae38c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e40f22a-44a4-472f-a7b7-e58276ae38c2\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5e3d59a7b5deb5b5bb9ed26ef98627fe8a74757b19294b3d5b8080c0dc9c5402/globalmount\"" pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.388299 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6723230-3e6b-43cc-bda7-2aac2faa0e67-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.388669 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6723230-3e6b-43cc-bda7-2aac2faa0e67-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.407400 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wq62\" (UniqueName: \"kubernetes.io/projected/a6723230-3e6b-43cc-bda7-2aac2faa0e67-kube-api-access-5wq62\") pod \"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.442301 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5e40f22a-44a4-472f-a7b7-e58276ae38c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e40f22a-44a4-472f-a7b7-e58276ae38c2\") pod 
\"openstack-galera-0\" (UID: \"a6723230-3e6b-43cc-bda7-2aac2faa0e67\") " pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.542722 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 17:18:58 crc kubenswrapper[4870]: I0216 17:18:58.922918 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.243353 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a6723230-3e6b-43cc-bda7-2aac2faa0e67","Type":"ContainerStarted","Data":"d58c492e87c7a9ee0e9741985ac3e96d9b2eb542323bd6cb78e6c4f147fcccce"} Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.651317 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.654041 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.657212 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.657466 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.658065 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.659000 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-fgz27" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.664899 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.760174 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-18d7480c-efba-4a3b-ad2d-51195a5f92c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-18d7480c-efba-4a3b-ad2d-51195a5f92c7\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.760230 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29m5t\" (UniqueName: \"kubernetes.io/projected/1c107984-3d0e-4627-98a9-0830571e42fa-kube-api-access-29m5t\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.760277 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1c107984-3d0e-4627-98a9-0830571e42fa-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.760295 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c107984-3d0e-4627-98a9-0830571e42fa-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.761099 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c107984-3d0e-4627-98a9-0830571e42fa-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.761166 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1c107984-3d0e-4627-98a9-0830571e42fa-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.761218 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1c107984-3d0e-4627-98a9-0830571e42fa-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.761245 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c107984-3d0e-4627-98a9-0830571e42fa-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.764718 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.765765 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.768684 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.768780 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.768906 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-87ndv" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.781639 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.862924 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-18d7480c-efba-4a3b-ad2d-51195a5f92c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-18d7480c-efba-4a3b-ad2d-51195a5f92c7\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.863010 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29m5t\" (UniqueName: \"kubernetes.io/projected/1c107984-3d0e-4627-98a9-0830571e42fa-kube-api-access-29m5t\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " 
pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.863054 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c107984-3d0e-4627-98a9-0830571e42fa-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.863072 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c107984-3d0e-4627-98a9-0830571e42fa-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.863113 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/858288c8-7418-43d3-ae1c-7974c170239d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"858288c8-7418-43d3-ae1c-7974c170239d\") " pod="openstack/memcached-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.863138 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c107984-3d0e-4627-98a9-0830571e42fa-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.863154 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/858288c8-7418-43d3-ae1c-7974c170239d-kolla-config\") pod \"memcached-0\" (UID: \"858288c8-7418-43d3-ae1c-7974c170239d\") " pod="openstack/memcached-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 
17:18:59.863176 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/858288c8-7418-43d3-ae1c-7974c170239d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"858288c8-7418-43d3-ae1c-7974c170239d\") " pod="openstack/memcached-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.863202 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/858288c8-7418-43d3-ae1c-7974c170239d-config-data\") pod \"memcached-0\" (UID: \"858288c8-7418-43d3-ae1c-7974c170239d\") " pod="openstack/memcached-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.863222 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1c107984-3d0e-4627-98a9-0830571e42fa-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.863437 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc6dx\" (UniqueName: \"kubernetes.io/projected/858288c8-7418-43d3-ae1c-7974c170239d-kube-api-access-cc6dx\") pod \"memcached-0\" (UID: \"858288c8-7418-43d3-ae1c-7974c170239d\") " pod="openstack/memcached-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.863525 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1c107984-3d0e-4627-98a9-0830571e42fa-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.863563 4870 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c107984-3d0e-4627-98a9-0830571e42fa-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.863867 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1c107984-3d0e-4627-98a9-0830571e42fa-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.864642 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c107984-3d0e-4627-98a9-0830571e42fa-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.864731 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1c107984-3d0e-4627-98a9-0830571e42fa-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.866181 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c107984-3d0e-4627-98a9-0830571e42fa-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.867517 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.867554 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-18d7480c-efba-4a3b-ad2d-51195a5f92c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-18d7480c-efba-4a3b-ad2d-51195a5f92c7\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/215a02a9a940acf838fb9602b98d765fb0484258264d9e79bd24aac8916d586b/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.868357 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c107984-3d0e-4627-98a9-0830571e42fa-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.870823 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c107984-3d0e-4627-98a9-0830571e42fa-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.888362 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29m5t\" (UniqueName: \"kubernetes.io/projected/1c107984-3d0e-4627-98a9-0830571e42fa-kube-api-access-29m5t\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.892884 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-18d7480c-efba-4a3b-ad2d-51195a5f92c7\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-18d7480c-efba-4a3b-ad2d-51195a5f92c7\") pod \"openstack-cell1-galera-0\" (UID: \"1c107984-3d0e-4627-98a9-0830571e42fa\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.964807 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/858288c8-7418-43d3-ae1c-7974c170239d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"858288c8-7418-43d3-ae1c-7974c170239d\") " pod="openstack/memcached-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.964858 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/858288c8-7418-43d3-ae1c-7974c170239d-kolla-config\") pod \"memcached-0\" (UID: \"858288c8-7418-43d3-ae1c-7974c170239d\") " pod="openstack/memcached-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.964881 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/858288c8-7418-43d3-ae1c-7974c170239d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"858288c8-7418-43d3-ae1c-7974c170239d\") " pod="openstack/memcached-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.964905 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/858288c8-7418-43d3-ae1c-7974c170239d-config-data\") pod \"memcached-0\" (UID: \"858288c8-7418-43d3-ae1c-7974c170239d\") " pod="openstack/memcached-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.964924 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc6dx\" (UniqueName: \"kubernetes.io/projected/858288c8-7418-43d3-ae1c-7974c170239d-kube-api-access-cc6dx\") pod \"memcached-0\" (UID: \"858288c8-7418-43d3-ae1c-7974c170239d\") " 
pod="openstack/memcached-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.965898 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/858288c8-7418-43d3-ae1c-7974c170239d-kolla-config\") pod \"memcached-0\" (UID: \"858288c8-7418-43d3-ae1c-7974c170239d\") " pod="openstack/memcached-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.966077 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/858288c8-7418-43d3-ae1c-7974c170239d-config-data\") pod \"memcached-0\" (UID: \"858288c8-7418-43d3-ae1c-7974c170239d\") " pod="openstack/memcached-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.968923 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/858288c8-7418-43d3-ae1c-7974c170239d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"858288c8-7418-43d3-ae1c-7974c170239d\") " pod="openstack/memcached-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.969547 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/858288c8-7418-43d3-ae1c-7974c170239d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"858288c8-7418-43d3-ae1c-7974c170239d\") " pod="openstack/memcached-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.981053 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 17:18:59 crc kubenswrapper[4870]: I0216 17:18:59.991399 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc6dx\" (UniqueName: \"kubernetes.io/projected/858288c8-7418-43d3-ae1c-7974c170239d-kube-api-access-cc6dx\") pod \"memcached-0\" (UID: \"858288c8-7418-43d3-ae1c-7974c170239d\") " pod="openstack/memcached-0" Feb 16 17:19:00 crc kubenswrapper[4870]: I0216 17:19:00.081674 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 16 17:19:01 crc kubenswrapper[4870]: I0216 17:19:01.946102 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 17:19:01 crc kubenswrapper[4870]: I0216 17:19:01.948094 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 17:19:01 crc kubenswrapper[4870]: I0216 17:19:01.955438 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-tgxpp" Feb 16 17:19:01 crc kubenswrapper[4870]: I0216 17:19:01.985987 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.099778 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkxfq\" (UniqueName: \"kubernetes.io/projected/086322f7-5554-4a10-a1be-10622174e27f-kube-api-access-gkxfq\") pod \"kube-state-metrics-0\" (UID: \"086322f7-5554-4a10-a1be-10622174e27f\") " pod="openstack/kube-state-metrics-0" Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.202787 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkxfq\" (UniqueName: \"kubernetes.io/projected/086322f7-5554-4a10-a1be-10622174e27f-kube-api-access-gkxfq\") pod \"kube-state-metrics-0\" (UID: 
\"086322f7-5554-4a10-a1be-10622174e27f\") " pod="openstack/kube-state-metrics-0" Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.247773 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkxfq\" (UniqueName: \"kubernetes.io/projected/086322f7-5554-4a10-a1be-10622174e27f-kube-api-access-gkxfq\") pod \"kube-state-metrics-0\" (UID: \"086322f7-5554-4a10-a1be-10622174e27f\") " pod="openstack/kube-state-metrics-0" Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.274211 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.821514 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.823627 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.828982 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.829257 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.829834 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.829874 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.830433 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-c6dzs" Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.849770 4870 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.916740 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.916808 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.916847 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.916873 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.917010 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/secret/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.917068 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:02 crc kubenswrapper[4870]: I0216 17:19:02.917099 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm2k8\" (UniqueName: \"kubernetes.io/projected/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-kube-api-access-cm2k8\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.018427 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.018473 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.018507 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm2k8\" (UniqueName: 
\"kubernetes.io/projected/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-kube-api-access-cm2k8\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.018562 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.018594 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.018623 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.018649 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.019935 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.023673 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.025097 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.025590 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.026009 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.042701 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-tls-assets\") pod 
\"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.043712 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm2k8\" (UniqueName: \"kubernetes.io/projected/79e9de5e-117f-4d5e-bfee-bad481a8c0b8-kube-api-access-cm2k8\") pod \"alertmanager-metric-storage-0\" (UID: \"79e9de5e-117f-4d5e-bfee-bad481a8c0b8\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.142397 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.262581 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.264504 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.271430 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-zxr4b" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.271662 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.271774 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.271885 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.272019 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 
17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.273064 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.273274 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.273426 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.276262 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.425329 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.425383 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.425441 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2742\" (UniqueName: \"kubernetes.io/projected/523f77f6-c829-4d3d-99c1-45bafcb30ee3-kube-api-access-f2742\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " 
pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.425469 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.425495 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.425511 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-config\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.425542 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/523f77f6-c829-4d3d-99c1-45bafcb30ee3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.425575 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/523f77f6-c829-4d3d-99c1-45bafcb30ee3-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.425592 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.425612 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.527289 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.528600 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.528685 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.529134 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2742\" (UniqueName: \"kubernetes.io/projected/523f77f6-c829-4d3d-99c1-45bafcb30ee3-kube-api-access-f2742\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.530003 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.530905 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.531417 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc 
kubenswrapper[4870]: I0216 17:19:03.532438 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-config\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.532827 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/523f77f6-c829-4d3d-99c1-45bafcb30ee3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.532982 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/523f77f6-c829-4d3d-99c1-45bafcb30ee3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.533082 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.533188 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.533098 4870 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.533459 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/db39f485af21a79151032c6fa9f638ff58e4b7e89021845f15a51ead92dc9627/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.534756 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.537478 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.538735 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.540587 
4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-config\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.540765 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/523f77f6-c829-4d3d-99c1-45bafcb30ee3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.546384 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/523f77f6-c829-4d3d-99c1-45bafcb30ee3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.552761 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2742\" (UniqueName: \"kubernetes.io/projected/523f77f6-c829-4d3d-99c1-45bafcb30ee3-kube-api-access-f2742\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.569918 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\") pod \"prometheus-metric-storage-0\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:03 crc kubenswrapper[4870]: I0216 17:19:03.588909 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.683450 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.685715 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.687750 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.688018 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-rwbdd" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.688262 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.688427 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.688481 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.691532 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.772998 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.773066 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.773091 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.773122 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-config\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.773151 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p4hc\" (UniqueName: \"kubernetes.io/projected/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-kube-api-access-7p4hc\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.773209 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cbae7c6a-cd64-479c-814f-8d2c7d65f105\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cbae7c6a-cd64-479c-814f-8d2c7d65f105\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.773257 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.773284 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.779564 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ktsg2"] Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.780939 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.785902 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.786365 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.787197 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-47g5f" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.790063 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ktsg2"] Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.804342 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rh6tb"] Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.806648 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.813524 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rh6tb"] Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875367 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/34012c0b-1886-446c-983e-6a1351630186-var-lib\") pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875441 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cbae7c6a-cd64-479c-814f-8d2c7d65f105\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cbae7c6a-cd64-479c-814f-8d2c7d65f105\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875494 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/34012c0b-1886-446c-983e-6a1351630186-etc-ovs\") pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875524 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875548 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875579 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-var-run\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875602 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-scripts\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875624 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/34012c0b-1886-446c-983e-6a1351630186-var-log\") pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875645 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-var-log-ovn\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875668 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-var-run-ovn\") pod 
\"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875683 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-combined-ca-bundle\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875702 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34012c0b-1886-446c-983e-6a1351630186-scripts\") pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875727 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-ovn-controller-tls-certs\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875744 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwwfv\" (UniqueName: \"kubernetes.io/projected/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-kube-api-access-bwwfv\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875761 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88g27\" (UniqueName: \"kubernetes.io/projected/34012c0b-1886-446c-983e-6a1351630186-kube-api-access-88g27\") 
pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875784 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875801 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/34012c0b-1886-446c-983e-6a1351630186-var-run\") pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875816 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875833 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.875854 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-config\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 
17:19:05.875873 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p4hc\" (UniqueName: \"kubernetes.io/projected/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-kube-api-access-7p4hc\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.878449 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-config\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.879237 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.879322 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.882518 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.882573 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.882591 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cbae7c6a-cd64-479c-814f-8d2c7d65f105\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cbae7c6a-cd64-479c-814f-8d2c7d65f105\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d388047dc570f1068d85d0009a5896dab3e5aa6959883fea01618963cd704ded/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.882836 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.883434 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.895786 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p4hc\" (UniqueName: \"kubernetes.io/projected/6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d-kube-api-access-7p4hc\") pod \"ovsdbserver-nb-0\" (UID: 
\"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.931647 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cbae7c6a-cd64-479c-814f-8d2c7d65f105\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cbae7c6a-cd64-479c-814f-8d2c7d65f105\") pod \"ovsdbserver-nb-0\" (UID: \"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.976739 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-var-log-ovn\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.976796 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-var-run-ovn\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.976811 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-combined-ca-bundle\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.976831 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34012c0b-1886-446c-983e-6a1351630186-scripts\") pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: 
I0216 17:19:05.976855 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-ovn-controller-tls-certs\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.976886 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwwfv\" (UniqueName: \"kubernetes.io/projected/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-kube-api-access-bwwfv\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.976907 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88g27\" (UniqueName: \"kubernetes.io/projected/34012c0b-1886-446c-983e-6a1351630186-kube-api-access-88g27\") pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.976931 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/34012c0b-1886-446c-983e-6a1351630186-var-run\") pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.976994 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/34012c0b-1886-446c-983e-6a1351630186-var-lib\") pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.977027 4870 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/34012c0b-1886-446c-983e-6a1351630186-etc-ovs\") pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.977088 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-var-run\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.977109 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-scripts\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.977130 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/34012c0b-1886-446c-983e-6a1351630186-var-log\") pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.977389 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-var-log-ovn\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.977445 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/34012c0b-1886-446c-983e-6a1351630186-var-log\") pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " 
pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.977633 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-var-run-ovn\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.977799 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/34012c0b-1886-446c-983e-6a1351630186-var-run\") pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.978014 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/34012c0b-1886-446c-983e-6a1351630186-etc-ovs\") pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.978111 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/34012c0b-1886-446c-983e-6a1351630186-var-lib\") pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.978418 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-var-run\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.980242 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-scripts\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.980724 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-combined-ca-bundle\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.981186 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34012c0b-1886-446c-983e-6a1351630186-scripts\") pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.991581 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-ovn-controller-tls-certs\") pod \"ovn-controller-ktsg2\" (UID: \"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.992626 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88g27\" (UniqueName: \"kubernetes.io/projected/34012c0b-1886-446c-983e-6a1351630186-kube-api-access-88g27\") pod \"ovn-controller-ovs-rh6tb\" (UID: \"34012c0b-1886-446c-983e-6a1351630186\") " pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:05 crc kubenswrapper[4870]: I0216 17:19:05.996235 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwwfv\" (UniqueName: \"kubernetes.io/projected/2f4b2faa-7ab7-40c8-a28f-d93749011dbe-kube-api-access-bwwfv\") pod \"ovn-controller-ktsg2\" (UID: 
\"2f4b2faa-7ab7-40c8-a28f-d93749011dbe\") " pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:06 crc kubenswrapper[4870]: I0216 17:19:06.001269 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:06 crc kubenswrapper[4870]: I0216 17:19:06.097339 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:06 crc kubenswrapper[4870]: I0216 17:19:06.131800 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.523224 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.525116 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.532180 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.533960 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.534177 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-wbgd4" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.535364 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.537822 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.683625 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.683700 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.683721 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.683772 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.683808 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ab6175a2-16c4-443f-8511-ed5dd7d03910\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab6175a2-16c4-443f-8511-ed5dd7d03910\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.683834 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgqpx\" (UniqueName: 
\"kubernetes.io/projected/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-kube-api-access-xgqpx\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.683848 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-config\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.683883 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.784930 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.785019 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.785039 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.785086 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.785116 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ab6175a2-16c4-443f-8511-ed5dd7d03910\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab6175a2-16c4-443f-8511-ed5dd7d03910\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.785135 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgqpx\" (UniqueName: \"kubernetes.io/projected/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-kube-api-access-xgqpx\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.785153 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-config\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.785188 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc 
kubenswrapper[4870]: I0216 17:19:10.787888 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.788936 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-config\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.789193 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.794235 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.794528 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ab6175a2-16c4-443f-8511-ed5dd7d03910\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab6175a2-16c4-443f-8511-ed5dd7d03910\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c8d464da5af603c6b135dfe141283d46e9950c3c2bbff7845d582b0f4551a14d/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.794296 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.794355 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.795744 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.813779 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgqpx\" (UniqueName: \"kubernetes.io/projected/c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe-kube-api-access-xgqpx\") pod \"ovsdbserver-sb-0\" (UID: 
\"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.827314 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ab6175a2-16c4-443f-8511-ed5dd7d03910\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab6175a2-16c4-443f-8511-ed5dd7d03910\") pod \"ovsdbserver-sb-0\" (UID: \"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:10 crc kubenswrapper[4870]: I0216 17:19:10.848836 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.610666 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr"] Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.612489 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.615360 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.615568 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-9m7pm" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.615624 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.615799 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.615899 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.625848 4870 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr"] Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.730296 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d435d2-948e-44d4-b0c2-8e1db0efb383-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-547gr\" (UID: \"69d435d2-948e-44d4-b0c2-8e1db0efb383\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.730451 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/69d435d2-948e-44d4-b0c2-8e1db0efb383-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-547gr\" (UID: \"69d435d2-948e-44d4-b0c2-8e1db0efb383\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.730544 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69d435d2-948e-44d4-b0c2-8e1db0efb383-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-547gr\" (UID: \"69d435d2-948e-44d4-b0c2-8e1db0efb383\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.730578 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/69d435d2-948e-44d4-b0c2-8e1db0efb383-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-547gr\" (UID: \"69d435d2-948e-44d4-b0c2-8e1db0efb383\") " 
pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.730618 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdlgh\" (UniqueName: \"kubernetes.io/projected/69d435d2-948e-44d4-b0c2-8e1db0efb383-kube-api-access-jdlgh\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-547gr\" (UID: \"69d435d2-948e-44d4-b0c2-8e1db0efb383\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.774484 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z"] Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.775885 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.777362 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.777810 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.778006 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.785562 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z"] Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.831712 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d435d2-948e-44d4-b0c2-8e1db0efb383-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-547gr\" (UID: \"69d435d2-948e-44d4-b0c2-8e1db0efb383\") " 
pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.832083 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/69d435d2-948e-44d4-b0c2-8e1db0efb383-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-547gr\" (UID: \"69d435d2-948e-44d4-b0c2-8e1db0efb383\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.832220 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69d435d2-948e-44d4-b0c2-8e1db0efb383-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-547gr\" (UID: \"69d435d2-948e-44d4-b0c2-8e1db0efb383\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.832299 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/69d435d2-948e-44d4-b0c2-8e1db0efb383-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-547gr\" (UID: \"69d435d2-948e-44d4-b0c2-8e1db0efb383\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.832384 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdlgh\" (UniqueName: \"kubernetes.io/projected/69d435d2-948e-44d4-b0c2-8e1db0efb383-kube-api-access-jdlgh\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-547gr\" (UID: \"69d435d2-948e-44d4-b0c2-8e1db0efb383\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.832937 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69d435d2-948e-44d4-b0c2-8e1db0efb383-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-547gr\" (UID: \"69d435d2-948e-44d4-b0c2-8e1db0efb383\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.832969 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d435d2-948e-44d4-b0c2-8e1db0efb383-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-547gr\" (UID: \"69d435d2-948e-44d4-b0c2-8e1db0efb383\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.848901 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/69d435d2-948e-44d4-b0c2-8e1db0efb383-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-547gr\" (UID: \"69d435d2-948e-44d4-b0c2-8e1db0efb383\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.848923 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/69d435d2-948e-44d4-b0c2-8e1db0efb383-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-547gr\" (UID: \"69d435d2-948e-44d4-b0c2-8e1db0efb383\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.877232 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdlgh\" (UniqueName: \"kubernetes.io/projected/69d435d2-948e-44d4-b0c2-8e1db0efb383-kube-api-access-jdlgh\") pod 
\"cloudkitty-lokistack-distributor-585d9bcbc-547gr\" (UID: \"69d435d2-948e-44d4-b0c2-8e1db0efb383\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.877301 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn"] Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.879301 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.884308 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.884686 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.889883 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn"] Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.934774 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: \"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.935075 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: 
\"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.935103 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: \"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.935143 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: \"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.935159 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: \"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.935175 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t76cp\" (UniqueName: \"kubernetes.io/projected/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-kube-api-access-t76cp\") pod \"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: \"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.938173 4870 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.985668 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd"] Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.986706 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.992419 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.992654 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.992753 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.992891 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.993473 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Feb 16 17:19:12 crc kubenswrapper[4870]: I0216 17:19:12.993601 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:12.999872 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f"] Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.001161 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.005870 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-xb5xw" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.022544 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd"] Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.036920 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: \"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.037014 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/3e04ba57-8554-4553-a62f-8b6787ba96dd-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn\" (UID: \"3e04ba57-8554-4553-a62f-8b6787ba96dd\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.037074 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e04ba57-8554-4553-a62f-8b6787ba96dd-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn\" (UID: \"3e04ba57-8554-4553-a62f-8b6787ba96dd\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.037108 4870 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6sh9\" (UniqueName: \"kubernetes.io/projected/3e04ba57-8554-4553-a62f-8b6787ba96dd-kube-api-access-t6sh9\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn\" (UID: \"3e04ba57-8554-4553-a62f-8b6787ba96dd\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.037134 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: \"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.037158 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e04ba57-8554-4553-a62f-8b6787ba96dd-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn\" (UID: \"3e04ba57-8554-4553-a62f-8b6787ba96dd\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.037195 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: \"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.037255 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-cloudkitty-loki-s3\") pod 
\"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: \"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.037278 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: \"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.037302 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t76cp\" (UniqueName: \"kubernetes.io/projected/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-kube-api-access-t76cp\") pod \"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: \"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.037370 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/3e04ba57-8554-4553-a62f-8b6787ba96dd-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn\" (UID: \"3e04ba57-8554-4553-a62f-8b6787ba96dd\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.042931 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: \"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " 
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.044014 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: \"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.044459 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: \"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.048207 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: \"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.050875 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: \"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.063141 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f"] Feb 16 17:19:13 crc 
kubenswrapper[4870]: I0216 17:19:13.090700 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t76cp\" (UniqueName: \"kubernetes.io/projected/e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d-kube-api-access-t76cp\") pod \"cloudkitty-lokistack-querier-58c84b5844-bgr8z\" (UID: \"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.112362 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.138525 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/3e04ba57-8554-4553-a62f-8b6787ba96dd-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn\" (UID: \"3e04ba57-8554-4553-a62f-8b6787ba96dd\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.138568 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g85h7\" (UniqueName: \"kubernetes.io/projected/56c2e555-c8e4-4391-bec8-9b98ed7a830b-kube-api-access-g85h7\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.138592 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.138615 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.138723 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e04ba57-8554-4553-a62f-8b6787ba96dd-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn\" (UID: \"3e04ba57-8554-4553-a62f-8b6787ba96dd\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.138782 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/56c2e555-c8e4-4391-bec8-9b98ed7a830b-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.138814 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6sh9\" (UniqueName: \"kubernetes.io/projected/3e04ba57-8554-4553-a62f-8b6787ba96dd-kube-api-access-t6sh9\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn\" (UID: \"3e04ba57-8554-4553-a62f-8b6787ba96dd\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.138834 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e04ba57-8554-4553-a62f-8b6787ba96dd-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn\" (UID: \"3e04ba57-8554-4553-a62f-8b6787ba96dd\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.138897 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c2e555-c8e4-4391-bec8-9b98ed7a830b-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.138921 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.139002 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c2e555-c8e4-4391-bec8-9b98ed7a830b-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.139034 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/56c2e555-c8e4-4391-bec8-9b98ed7a830b-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.139068 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/56c2e555-c8e4-4391-bec8-9b98ed7a830b-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.139103 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/56c2e555-c8e4-4391-bec8-9b98ed7a830b-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.139130 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/56c2e555-c8e4-4391-bec8-9b98ed7a830b-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.139172 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc 
kubenswrapper[4870]: I0216 17:19:13.139191 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c2e555-c8e4-4391-bec8-9b98ed7a830b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.139235 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/3e04ba57-8554-4553-a62f-8b6787ba96dd-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn\" (UID: \"3e04ba57-8554-4553-a62f-8b6787ba96dd\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.139260 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.139288 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.139321 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxnjv\" (UniqueName: \"kubernetes.io/projected/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-kube-api-access-mxnjv\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.139365 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.139381 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.140272 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e04ba57-8554-4553-a62f-8b6787ba96dd-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn\" (UID: \"3e04ba57-8554-4553-a62f-8b6787ba96dd\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.140676 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e04ba57-8554-4553-a62f-8b6787ba96dd-config\") pod 
\"cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn\" (UID: \"3e04ba57-8554-4553-a62f-8b6787ba96dd\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.144561 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/3e04ba57-8554-4553-a62f-8b6787ba96dd-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn\" (UID: \"3e04ba57-8554-4553-a62f-8b6787ba96dd\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.144588 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/3e04ba57-8554-4553-a62f-8b6787ba96dd-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn\" (UID: \"3e04ba57-8554-4553-a62f-8b6787ba96dd\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.157086 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6sh9\" (UniqueName: \"kubernetes.io/projected/3e04ba57-8554-4553-a62f-8b6787ba96dd-kube-api-access-t6sh9\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn\" (UID: \"3e04ba57-8554-4553-a62f-8b6787ba96dd\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.233173 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241130 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c2e555-c8e4-4391-bec8-9b98ed7a830b-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241194 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241248 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c2e555-c8e4-4391-bec8-9b98ed7a830b-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241284 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/56c2e555-c8e4-4391-bec8-9b98ed7a830b-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241319 4870 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/56c2e555-c8e4-4391-bec8-9b98ed7a830b-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241349 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/56c2e555-c8e4-4391-bec8-9b98ed7a830b-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241378 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/56c2e555-c8e4-4391-bec8-9b98ed7a830b-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241413 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241439 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c2e555-c8e4-4391-bec8-9b98ed7a830b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 
17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241479 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241526 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241562 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxnjv\" (UniqueName: \"kubernetes.io/projected/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-kube-api-access-mxnjv\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241601 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241628 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241668 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g85h7\" (UniqueName: \"kubernetes.io/projected/56c2e555-c8e4-4391-bec8-9b98ed7a830b-kube-api-access-g85h7\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241697 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241729 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.241771 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/56c2e555-c8e4-4391-bec8-9b98ed7a830b-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.242513 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c2e555-c8e4-4391-bec8-9b98ed7a830b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.242774 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.242860 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.243533 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.243641 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/56c2e555-c8e4-4391-bec8-9b98ed7a830b-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.243665 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.244253 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c2e555-c8e4-4391-bec8-9b98ed7a830b-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.244461 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/56c2e555-c8e4-4391-bec8-9b98ed7a830b-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.245026 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.246695 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/56c2e555-c8e4-4391-bec8-9b98ed7a830b-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.247267 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/56c2e555-c8e4-4391-bec8-9b98ed7a830b-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.249468 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.249534 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/56c2e555-c8e4-4391-bec8-9b98ed7a830b-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.249822 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.250804 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.253390 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/56c2e555-c8e4-4391-bec8-9b98ed7a830b-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.259842 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g85h7\" (UniqueName: \"kubernetes.io/projected/56c2e555-c8e4-4391-bec8-9b98ed7a830b-kube-api-access-g85h7\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-gdknd\" (UID: \"56c2e555-c8e4-4391-bec8-9b98ed7a830b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.266691 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxnjv\" (UniqueName: \"kubernetes.io/projected/3b99d5cf-946f-4e7f-980d-1e6bf6aec95e-kube-api-access-mxnjv\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-nhw2f\" (UID: \"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.329686 
4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.437832 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.757114 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.758527 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.761489 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.761766 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.765615 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.831472 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.833019 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.836114 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.840507 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.846097 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.864421 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.864512 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d158e8d5-206e-4289-a1e5-247fddf29a11-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.864540 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/d158e8d5-206e-4289-a1e5-247fddf29a11-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.864585 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d158e8d5-206e-4289-a1e5-247fddf29a11-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.864628 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d158e8d5-206e-4289-a1e5-247fddf29a11-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.864671 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/d158e8d5-206e-4289-a1e5-247fddf29a11-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.864716 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.864752 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq54l\" (UniqueName: \"kubernetes.io/projected/d158e8d5-206e-4289-a1e5-247fddf29a11-kube-api-access-wq54l\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 
17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.937051 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.938218 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.941967 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.941972 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.951456 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.966395 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq54l\" (UniqueName: \"kubernetes.io/projected/d158e8d5-206e-4289-a1e5-247fddf29a11-kube-api-access-wq54l\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.966441 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.966499 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d158e8d5-206e-4289-a1e5-247fddf29a11-cloudkitty-lokistack-ca-bundle\") pod 
\"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.966528 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/d158e8d5-206e-4289-a1e5-247fddf29a11-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.966561 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.966587 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d158e8d5-206e-4289-a1e5-247fddf29a11-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.966617 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d158e8d5-206e-4289-a1e5-247fddf29a11-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.966639 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzk8q\" (UniqueName: 
\"kubernetes.io/projected/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-kube-api-access-xzk8q\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.966673 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.966695 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/d158e8d5-206e-4289-a1e5-247fddf29a11-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.966719 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.966738 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.966756 4870 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.966770 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.966791 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.966908 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.967563 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/cloudkitty-lokistack-ingester-0" 
Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.967686 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d158e8d5-206e-4289-a1e5-247fddf29a11-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.969234 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d158e8d5-206e-4289-a1e5-247fddf29a11-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.981574 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/d158e8d5-206e-4289-a1e5-247fddf29a11-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.981835 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d158e8d5-206e-4289-a1e5-247fddf29a11-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.981886 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/d158e8d5-206e-4289-a1e5-247fddf29a11-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " 
pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.985367 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq54l\" (UniqueName: \"kubernetes.io/projected/d158e8d5-206e-4289-a1e5-247fddf29a11-kube-api-access-wq54l\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.996662 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:13 crc kubenswrapper[4870]: I0216 17:19:13.998656 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"d158e8d5-206e-4289-a1e5-247fddf29a11\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.068740 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzk8q\" (UniqueName: \"kubernetes.io/projected/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-kube-api-access-xzk8q\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.068799 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tc9n\" (UniqueName: \"kubernetes.io/projected/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-kube-api-access-9tc9n\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " 
pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.068828 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.068903 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.068938 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.068972 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.068990 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-cloudkitty-lokistack-index-gateway-grpc\") pod 
\"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.069010 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.069071 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.069098 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.069122 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.069151 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.069180 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.069196 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.072377 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.072424 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " 
pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.072507 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.072827 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.073573 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.077275 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.092877 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.094677 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzk8q\" (UniqueName: \"kubernetes.io/projected/ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24-kube-api-access-xzk8q\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.106529 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.170456 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.170574 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.170612 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: 
\"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.170653 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.170681 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.170697 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.170727 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tc9n\" (UniqueName: \"kubernetes.io/projected/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-kube-api-access-9tc9n\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.171814 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.172504 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.172753 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.175600 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.177866 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.180508 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.187403 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tc9n\" (UniqueName: \"kubernetes.io/projected/c7e13f68-6de2-4cf5-b655-77e0c2141ea1-kube-api-access-9tc9n\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.196699 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c7e13f68-6de2-4cf5-b655-77e0c2141ea1\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.215430 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.258751 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:14 crc kubenswrapper[4870]: I0216 17:19:14.580964 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 16 17:19:16 crc kubenswrapper[4870]: E0216 17:19:16.651250 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 17:19:16 crc kubenswrapper[4870]: E0216 17:19:16.651718 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c84kn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-bptzj_openstack(d8ed11f9-9a98-4e02-923f-91dc562a8886): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:19:16 crc kubenswrapper[4870]: E0216 17:19:16.653163 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-bptzj" podUID="d8ed11f9-9a98-4e02-923f-91dc562a8886" Feb 16 17:19:16 crc kubenswrapper[4870]: E0216 17:19:16.684173 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 17:19:16 crc kubenswrapper[4870]: E0216 17:19:16.688413 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9h8cs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-m46vp_openstack(3d35ff66-a700-46ba-9f68-728f2c0c1aa9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:19:16 crc kubenswrapper[4870]: E0216 17:19:16.693797 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-m46vp" podUID="3d35ff66-a700-46ba-9f68-728f2c0c1aa9" Feb 16 17:19:17 crc kubenswrapper[4870]: I0216 17:19:17.030813 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 17:19:18 crc kubenswrapper[4870]: W0216 17:19:18.484373 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79e9de5e_117f_4d5e_bfee_bad481a8c0b8.slice/crio-2813ea747a57af5cf871c7ba02600dc2927e27da42fe2d69ee4d109e7bb5b66e WatchSource:0}: Error finding container 2813ea747a57af5cf871c7ba02600dc2927e27da42fe2d69ee4d109e7bb5b66e: Status 404 returned error can't find the container with id 2813ea747a57af5cf871c7ba02600dc2927e27da42fe2d69ee4d109e7bb5b66e Feb 16 17:19:18 crc kubenswrapper[4870]: I0216 17:19:18.640486 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-m46vp" Feb 16 17:19:18 crc kubenswrapper[4870]: I0216 17:19:18.655280 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bptzj" Feb 16 17:19:18 crc kubenswrapper[4870]: I0216 17:19:18.746646 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ed11f9-9a98-4e02-923f-91dc562a8886-config\") pod \"d8ed11f9-9a98-4e02-923f-91dc562a8886\" (UID: \"d8ed11f9-9a98-4e02-923f-91dc562a8886\") " Feb 16 17:19:18 crc kubenswrapper[4870]: I0216 17:19:18.747079 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-dns-svc\") pod \"3d35ff66-a700-46ba-9f68-728f2c0c1aa9\" (UID: \"3d35ff66-a700-46ba-9f68-728f2c0c1aa9\") " Feb 16 17:19:18 crc kubenswrapper[4870]: I0216 17:19:18.747108 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h8cs\" (UniqueName: \"kubernetes.io/projected/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-kube-api-access-9h8cs\") pod \"3d35ff66-a700-46ba-9f68-728f2c0c1aa9\" (UID: \"3d35ff66-a700-46ba-9f68-728f2c0c1aa9\") " Feb 16 17:19:18 crc kubenswrapper[4870]: I0216 17:19:18.747302 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c84kn\" (UniqueName: \"kubernetes.io/projected/d8ed11f9-9a98-4e02-923f-91dc562a8886-kube-api-access-c84kn\") pod \"d8ed11f9-9a98-4e02-923f-91dc562a8886\" (UID: \"d8ed11f9-9a98-4e02-923f-91dc562a8886\") " Feb 16 17:19:18 crc kubenswrapper[4870]: I0216 17:19:18.747322 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-config\") pod \"3d35ff66-a700-46ba-9f68-728f2c0c1aa9\" (UID: 
\"3d35ff66-a700-46ba-9f68-728f2c0c1aa9\") " Feb 16 17:19:18 crc kubenswrapper[4870]: I0216 17:19:18.747873 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3d35ff66-a700-46ba-9f68-728f2c0c1aa9" (UID: "3d35ff66-a700-46ba-9f68-728f2c0c1aa9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:18 crc kubenswrapper[4870]: I0216 17:19:18.747983 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8ed11f9-9a98-4e02-923f-91dc562a8886-config" (OuterVolumeSpecName: "config") pod "d8ed11f9-9a98-4e02-923f-91dc562a8886" (UID: "d8ed11f9-9a98-4e02-923f-91dc562a8886"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:18 crc kubenswrapper[4870]: I0216 17:19:18.747993 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-config" (OuterVolumeSpecName: "config") pod "3d35ff66-a700-46ba-9f68-728f2c0c1aa9" (UID: "3d35ff66-a700-46ba-9f68-728f2c0c1aa9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:18 crc kubenswrapper[4870]: I0216 17:19:18.753373 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ed11f9-9a98-4e02-923f-91dc562a8886-kube-api-access-c84kn" (OuterVolumeSpecName: "kube-api-access-c84kn") pod "d8ed11f9-9a98-4e02-923f-91dc562a8886" (UID: "d8ed11f9-9a98-4e02-923f-91dc562a8886"). InnerVolumeSpecName "kube-api-access-c84kn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:19:18 crc kubenswrapper[4870]: I0216 17:19:18.753476 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-kube-api-access-9h8cs" (OuterVolumeSpecName: "kube-api-access-9h8cs") pod "3d35ff66-a700-46ba-9f68-728f2c0c1aa9" (UID: "3d35ff66-a700-46ba-9f68-728f2c0c1aa9"). InnerVolumeSpecName "kube-api-access-9h8cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:19:18 crc kubenswrapper[4870]: I0216 17:19:18.849713 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ed11f9-9a98-4e02-923f-91dc562a8886-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:18 crc kubenswrapper[4870]: I0216 17:19:18.849744 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:18 crc kubenswrapper[4870]: I0216 17:19:18.849757 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h8cs\" (UniqueName: \"kubernetes.io/projected/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-kube-api-access-9h8cs\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:18 crc kubenswrapper[4870]: I0216 17:19:18.849772 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c84kn\" (UniqueName: \"kubernetes.io/projected/d8ed11f9-9a98-4e02-923f-91dc562a8886-kube-api-access-c84kn\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:18 crc kubenswrapper[4870]: I0216 17:19:18.849789 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d35ff66-a700-46ba-9f68-728f2c0c1aa9-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.435655 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-78dd6ddcc-m46vp" event={"ID":"3d35ff66-a700-46ba-9f68-728f2c0c1aa9","Type":"ContainerDied","Data":"7be663a1f9c83c0bbd150f726365800c6c13b36c78fde78f4f8e2e39aef3599e"} Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.435718 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-m46vp" Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.437348 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bptzj" event={"ID":"d8ed11f9-9a98-4e02-923f-91dc562a8886","Type":"ContainerDied","Data":"c7062692afdf35c620e41898ece245585b4585fa1ae732de8b9b7afbb45f64fc"} Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.437495 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bptzj" Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.440671 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a6723230-3e6b-43cc-bda7-2aac2faa0e67","Type":"ContainerStarted","Data":"7a7778883bb3c81ae7d08d86664fda96c8e4dd9934778897a0764d7d263f3b94"} Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.447966 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"79e9de5e-117f-4d5e-bfee-bad481a8c0b8","Type":"ContainerStarted","Data":"2813ea747a57af5cf871c7ba02600dc2927e27da42fe2d69ee4d109e7bb5b66e"} Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.452469 4870 generic.go:334] "Generic (PLEG): container finished" podID="4882424e-0156-4f5d-b6fb-9f7a54d52ded" containerID="2a8bf46d2ba8cbfb314941ae60e36729f80b774d1f599851dbcef6b771985dd1" exitCode=0 Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.452530 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" 
event={"ID":"4882424e-0156-4f5d-b6fb-9f7a54d52ded","Type":"ContainerDied","Data":"2a8bf46d2ba8cbfb314941ae60e36729f80b774d1f599851dbcef6b771985dd1"} Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.458433 4870 generic.go:334] "Generic (PLEG): container finished" podID="743bd071-f9bd-4948-b99b-cd3e29bfe49e" containerID="bbd7d2f61ec90169be154d4babaade2aa032ea16deff8eecb73e82a7a0b1be08" exitCode=0 Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.458547 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" event={"ID":"743bd071-f9bd-4948-b99b-cd3e29bfe49e","Type":"ContainerDied","Data":"bbd7d2f61ec90169be154d4babaade2aa032ea16deff8eecb73e82a7a0b1be08"} Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.469811 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"523f77f6-c829-4d3d-99c1-45bafcb30ee3","Type":"ContainerStarted","Data":"43bfeefee6f1e3c831ed4e4534d7ea9e780830016fa4bc01a91c9ec7f2d0487b"} Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.539864 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.570122 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.672528 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bptzj"] Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.686171 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bptzj"] Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.700352 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m46vp"] Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.707710 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m46vp"] Feb 16 
17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.788193 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f"] Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.801218 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd"] Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.811739 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn"] Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.823632 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ktsg2"] Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.833429 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z"] Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.847969 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.856815 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr"] Feb 16 17:19:19 crc kubenswrapper[4870]: W0216 17:19:19.910973 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e04ba57_8554_4553_a62f_8b6787ba96dd.slice/crio-a34122a3873e4e3169a14fddfc51e6b5a75aac4e02d82768d7c405c38fa0d290 WatchSource:0}: Error finding container a34122a3873e4e3169a14fddfc51e6b5a75aac4e02d82768d7c405c38fa0d290: Status 404 returned error can't find the container with id a34122a3873e4e3169a14fddfc51e6b5a75aac4e02d82768d7c405c38fa0d290 Feb 16 17:19:19 crc kubenswrapper[4870]: I0216 17:19:19.914806 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 17:19:19 crc kubenswrapper[4870]: W0216 17:19:19.919585 4870 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b99d5cf_946f_4e7f_980d_1e6bf6aec95e.slice/crio-e99db44488c4026a88f423162b55141dd5175171095e7972068a540140c97072 WatchSource:0}: Error finding container e99db44488c4026a88f423162b55141dd5175171095e7972068a540140c97072: Status 404 returned error can't find the container with id e99db44488c4026a88f423162b55141dd5175171095e7972068a540140c97072 Feb 16 17:19:19 crc kubenswrapper[4870]: W0216 17:19:19.964141 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6be0bc8f_cbbf_4f5e_98c4_4b46ffa0041d.slice/crio-3dfec06d4a79ae3aa874d1f17c8072b2e87138e7ebfd97e0792d640ad8f078e9 WatchSource:0}: Error finding container 3dfec06d4a79ae3aa874d1f17c8072b2e87138e7ebfd97e0792d640ad8f078e9: Status 404 returned error can't find the container with id 3dfec06d4a79ae3aa874d1f17c8072b2e87138e7ebfd97e0792d640ad8f078e9 Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.138041 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.165971 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.183134 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 17:19:20 crc kubenswrapper[4870]: W0216 17:19:20.187407 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca54e9a0_3f4f_4f0b_96cb_56ecd8015d24.slice/crio-c1e680b650d9e480cd0979fcedc8dae6f3752a69a393162032ee0f8d49ad3e6f WatchSource:0}: Error finding container c1e680b650d9e480cd0979fcedc8dae6f3752a69a393162032ee0f8d49ad3e6f: Status 404 returned error can't find the container with id 
c1e680b650d9e480cd0979fcedc8dae6f3752a69a393162032ee0f8d49ad3e6f Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.221146 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.235836 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d35ff66-a700-46ba-9f68-728f2c0c1aa9" path="/var/lib/kubelet/pods/3d35ff66-a700-46ba-9f68-728f2c0c1aa9/volumes" Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.236437 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8ed11f9-9a98-4e02-923f-91dc562a8886" path="/var/lib/kubelet/pods/d8ed11f9-9a98-4e02-923f-91dc562a8886/volumes" Feb 16 17:19:20 crc kubenswrapper[4870]: W0216 17:19:20.390400 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4e1ed4d_c5c8_4497_960a_0035c3fc3fbe.slice/crio-3b501270f281605e71c41ea1236151b498b093780adfaacb974e229fbc9f4c97 WatchSource:0}: Error finding container 3b501270f281605e71c41ea1236151b498b093780adfaacb974e229fbc9f4c97: Status 404 returned error can't find the container with id 3b501270f281605e71c41ea1236151b498b093780adfaacb974e229fbc9f4c97 Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.478573 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d027dcfc-cbb1-4c78-b55f-0ed148b1faad","Type":"ContainerStarted","Data":"149a313529f1889f323e8d01ea09a58bd9c3ca4118908686140007106438a56b"} Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.486585 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24","Type":"ContainerStarted","Data":"c1e680b650d9e480cd0979fcedc8dae6f3752a69a393162032ee0f8d49ad3e6f"} Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.501152 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" event={"ID":"4882424e-0156-4f5d-b6fb-9f7a54d52ded","Type":"ContainerStarted","Data":"b4bdaf5f97b2c1441d8dfbdc84f91ea6a1eb33acb677470d3a443d8125cfd7ba"} Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.502472 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.504198 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"086322f7-5554-4a10-a1be-10622174e27f","Type":"ContainerStarted","Data":"c6c9c80718dba9c43aab8c20e2da28eb608125c186d5b1e036d2df832d18c2ab"} Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.531135 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" event={"ID":"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e","Type":"ContainerStarted","Data":"e99db44488c4026a88f423162b55141dd5175171095e7972068a540140c97072"} Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.535888 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" podStartSLOduration=3.175506048 podStartE2EDuration="25.535867813s" podCreationTimestamp="2026-02-16 17:18:55 +0000 UTC" firstStartedPulling="2026-02-16 17:18:56.412643292 +0000 UTC m=+1140.896107666" lastFinishedPulling="2026-02-16 17:19:18.773005047 +0000 UTC m=+1163.256469431" observedRunningTime="2026-02-16 17:19:20.535232005 +0000 UTC m=+1165.018696409" watchObservedRunningTime="2026-02-16 17:19:20.535867813 +0000 UTC m=+1165.019332207" Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.538168 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" event={"ID":"3e04ba57-8554-4553-a62f-8b6787ba96dd","Type":"ContainerStarted","Data":"a34122a3873e4e3169a14fddfc51e6b5a75aac4e02d82768d7c405c38fa0d290"} Feb 16 17:19:20 crc 
kubenswrapper[4870]: I0216 17:19:20.539804 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"d158e8d5-206e-4289-a1e5-247fddf29a11","Type":"ContainerStarted","Data":"0937da95193fe8f30a6081c777d1a1e2b48a74f24a06e6e96e8c203f6cf0cb87"} Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.540844 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ktsg2" event={"ID":"2f4b2faa-7ab7-40c8-a28f-d93749011dbe","Type":"ContainerStarted","Data":"c491ef70d647b3b2d2355aee86a0c9855d1d95721f8212c5b522839201173de5"} Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.544469 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" event={"ID":"743bd071-f9bd-4948-b99b-cd3e29bfe49e","Type":"ContainerStarted","Data":"536345f5604d70bac97e4bcd5628d34a588660ad504e6695e8f1ae9a70143cf8"} Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.545583 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.546461 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"c7e13f68-6de2-4cf5-b655-77e0c2141ea1","Type":"ContainerStarted","Data":"4e8f751c16cb8b6e62db71d968a2559bad5c7211d622dc266c12f773d59364a5"} Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.547536 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe","Type":"ContainerStarted","Data":"3b501270f281605e71c41ea1236151b498b093780adfaacb974e229fbc9f4c97"} Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.549672 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" 
event={"ID":"56c2e555-c8e4-4391-bec8-9b98ed7a830b","Type":"ContainerStarted","Data":"e21d211d879f5cfc11df620dd6329b49d9b30feca278076514501b6409828589"} Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.564094 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" podStartSLOduration=3.2974903810000002 podStartE2EDuration="25.564080237s" podCreationTimestamp="2026-02-16 17:18:55 +0000 UTC" firstStartedPulling="2026-02-16 17:18:56.678673816 +0000 UTC m=+1141.162138200" lastFinishedPulling="2026-02-16 17:19:18.945263672 +0000 UTC m=+1163.428728056" observedRunningTime="2026-02-16 17:19:20.562815851 +0000 UTC m=+1165.046280255" watchObservedRunningTime="2026-02-16 17:19:20.564080237 +0000 UTC m=+1165.047544621" Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.566711 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" event={"ID":"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d","Type":"ContainerStarted","Data":"4c20ce694a9cb68d1e893c02ee2898ee1acbbe431bec7d4a58706d132cca4878"} Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.570696 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" event={"ID":"69d435d2-948e-44d4-b0c2-8e1db0efb383","Type":"ContainerStarted","Data":"a58fe76fd3d09c872b6689802c6b2e83ad9dd1c2c82222a0a001e008786267b4"} Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.571993 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"858288c8-7418-43d3-ae1c-7974c170239d","Type":"ContainerStarted","Data":"4fbc6b8f4b8f2c3825f492d57996565e8335f5817e84d3af824ad78a2b5180da"} Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.574215 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"1c107984-3d0e-4627-98a9-0830571e42fa","Type":"ContainerStarted","Data":"c7e95703240465769d62692aa861b60810cee994b7f187ee32bc5ada53fe3787"} Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.574258 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1c107984-3d0e-4627-98a9-0830571e42fa","Type":"ContainerStarted","Data":"4b5cdd02135888bbb8ee439fc5972463f875127db3fdc37aa577d931a1168125"} Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.575297 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d","Type":"ContainerStarted","Data":"3dfec06d4a79ae3aa874d1f17c8072b2e87138e7ebfd97e0792d640ad8f078e9"} Feb 16 17:19:20 crc kubenswrapper[4870]: I0216 17:19:20.634464 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rh6tb"] Feb 16 17:19:21 crc kubenswrapper[4870]: I0216 17:19:21.587964 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"66aba020-76f1-4cf7-992b-0745bd3c3512","Type":"ContainerStarted","Data":"2a0aeb2e000109dda624a19f547d11b30ddb6a33113ab5239ab656d4cc73f800"} Feb 16 17:19:22 crc kubenswrapper[4870]: I0216 17:19:22.608412 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rh6tb" event={"ID":"34012c0b-1886-446c-983e-6a1351630186","Type":"ContainerStarted","Data":"f56e3b7769999baa55d9b65ad05e7a985e1b8201d9273b218efaa0cc497ab9b4"} Feb 16 17:19:23 crc kubenswrapper[4870]: I0216 17:19:23.619234 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" event={"ID":"3b99d5cf-946f-4e7f-980d-1e6bf6aec95e","Type":"ContainerStarted","Data":"3312e3bdd99cac0f0c917610ff520a920b7f7b021fc7866836f6b29d10cebd08"} Feb 16 17:19:23 crc kubenswrapper[4870]: I0216 17:19:23.619594 4870 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:23 crc kubenswrapper[4870]: I0216 17:19:23.623536 4870 generic.go:334] "Generic (PLEG): container finished" podID="a6723230-3e6b-43cc-bda7-2aac2faa0e67" containerID="7a7778883bb3c81ae7d08d86664fda96c8e4dd9934778897a0764d7d263f3b94" exitCode=0 Feb 16 17:19:23 crc kubenswrapper[4870]: I0216 17:19:23.623588 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a6723230-3e6b-43cc-bda7-2aac2faa0e67","Type":"ContainerDied","Data":"7a7778883bb3c81ae7d08d86664fda96c8e4dd9934778897a0764d7d263f3b94"} Feb 16 17:19:23 crc kubenswrapper[4870]: I0216 17:19:23.625613 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"858288c8-7418-43d3-ae1c-7974c170239d","Type":"ContainerStarted","Data":"c95be93fad249c02137179e629ced881a90e81b955a522fd0b59bd1e844d9562"} Feb 16 17:19:23 crc kubenswrapper[4870]: I0216 17:19:23.625749 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 16 17:19:23 crc kubenswrapper[4870]: I0216 17:19:23.630072 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" Feb 16 17:19:23 crc kubenswrapper[4870]: I0216 17:19:23.643446 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-nhw2f" podStartSLOduration=8.717971896 podStartE2EDuration="11.643425622s" podCreationTimestamp="2026-02-16 17:19:12 +0000 UTC" firstStartedPulling="2026-02-16 17:19:19.943179508 +0000 UTC m=+1164.426643892" lastFinishedPulling="2026-02-16 17:19:22.868633234 +0000 UTC m=+1167.352097618" observedRunningTime="2026-02-16 17:19:23.636629261 +0000 UTC m=+1168.120093645" watchObservedRunningTime="2026-02-16 17:19:23.643425622 +0000 UTC m=+1168.126890006" Feb 16 17:19:23 crc kubenswrapper[4870]: I0216 
17:19:23.701096 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=21.534424623 podStartE2EDuration="24.701069104s" podCreationTimestamp="2026-02-16 17:18:59 +0000 UTC" firstStartedPulling="2026-02-16 17:19:19.546425576 +0000 UTC m=+1164.029889960" lastFinishedPulling="2026-02-16 17:19:22.713070057 +0000 UTC m=+1167.196534441" observedRunningTime="2026-02-16 17:19:23.674179068 +0000 UTC m=+1168.157643452" watchObservedRunningTime="2026-02-16 17:19:23.701069104 +0000 UTC m=+1168.184533488" Feb 16 17:19:24 crc kubenswrapper[4870]: I0216 17:19:24.639074 4870 generic.go:334] "Generic (PLEG): container finished" podID="1c107984-3d0e-4627-98a9-0830571e42fa" containerID="c7e95703240465769d62692aa861b60810cee994b7f187ee32bc5ada53fe3787" exitCode=0 Feb 16 17:19:24 crc kubenswrapper[4870]: I0216 17:19:24.639152 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1c107984-3d0e-4627-98a9-0830571e42fa","Type":"ContainerDied","Data":"c7e95703240465769d62692aa861b60810cee994b7f187ee32bc5ada53fe3787"} Feb 16 17:19:25 crc kubenswrapper[4870]: I0216 17:19:25.648166 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"523f77f6-c829-4d3d-99c1-45bafcb30ee3","Type":"ContainerStarted","Data":"3f00fdf72c01fcfa772f386ad93e343124a9ed164f27f4e1851ac3ab6b7344e6"} Feb 16 17:19:25 crc kubenswrapper[4870]: I0216 17:19:25.903087 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" Feb 16 17:19:26 crc kubenswrapper[4870]: I0216 17:19:26.209110 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" Feb 16 17:19:26 crc kubenswrapper[4870]: I0216 17:19:26.328841 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zd6zl"] Feb 16 17:19:26 crc 
kubenswrapper[4870]: I0216 17:19:26.661138 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"79e9de5e-117f-4d5e-bfee-bad481a8c0b8","Type":"ContainerStarted","Data":"921ebdc2ff4a3aaf9a00477ffa64be6180cd53f7d4033bc11eda12fa7cf9c4eb"} Feb 16 17:19:26 crc kubenswrapper[4870]: I0216 17:19:26.661273 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" podUID="4882424e-0156-4f5d-b6fb-9f7a54d52ded" containerName="dnsmasq-dns" containerID="cri-o://b4bdaf5f97b2c1441d8dfbdc84f91ea6a1eb33acb677470d3a443d8125cfd7ba" gracePeriod=10 Feb 16 17:19:27 crc kubenswrapper[4870]: I0216 17:19:27.672259 4870 generic.go:334] "Generic (PLEG): container finished" podID="4882424e-0156-4f5d-b6fb-9f7a54d52ded" containerID="b4bdaf5f97b2c1441d8dfbdc84f91ea6a1eb33acb677470d3a443d8125cfd7ba" exitCode=0 Feb 16 17:19:27 crc kubenswrapper[4870]: I0216 17:19:27.672346 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" event={"ID":"4882424e-0156-4f5d-b6fb-9f7a54d52ded","Type":"ContainerDied","Data":"b4bdaf5f97b2c1441d8dfbdc84f91ea6a1eb33acb677470d3a443d8125cfd7ba"} Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.189132 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.270601 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4882424e-0156-4f5d-b6fb-9f7a54d52ded-config\") pod \"4882424e-0156-4f5d-b6fb-9f7a54d52ded\" (UID: \"4882424e-0156-4f5d-b6fb-9f7a54d52ded\") " Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.270651 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtrlj\" (UniqueName: \"kubernetes.io/projected/4882424e-0156-4f5d-b6fb-9f7a54d52ded-kube-api-access-qtrlj\") pod \"4882424e-0156-4f5d-b6fb-9f7a54d52ded\" (UID: \"4882424e-0156-4f5d-b6fb-9f7a54d52ded\") " Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.270799 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4882424e-0156-4f5d-b6fb-9f7a54d52ded-dns-svc\") pod \"4882424e-0156-4f5d-b6fb-9f7a54d52ded\" (UID: \"4882424e-0156-4f5d-b6fb-9f7a54d52ded\") " Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.277097 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4882424e-0156-4f5d-b6fb-9f7a54d52ded-kube-api-access-qtrlj" (OuterVolumeSpecName: "kube-api-access-qtrlj") pod "4882424e-0156-4f5d-b6fb-9f7a54d52ded" (UID: "4882424e-0156-4f5d-b6fb-9f7a54d52ded"). InnerVolumeSpecName "kube-api-access-qtrlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.308004 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4882424e-0156-4f5d-b6fb-9f7a54d52ded-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4882424e-0156-4f5d-b6fb-9f7a54d52ded" (UID: "4882424e-0156-4f5d-b6fb-9f7a54d52ded"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.317663 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4882424e-0156-4f5d-b6fb-9f7a54d52ded-config" (OuterVolumeSpecName: "config") pod "4882424e-0156-4f5d-b6fb-9f7a54d52ded" (UID: "4882424e-0156-4f5d-b6fb-9f7a54d52ded"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.375067 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4882424e-0156-4f5d-b6fb-9f7a54d52ded-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.375096 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtrlj\" (UniqueName: \"kubernetes.io/projected/4882424e-0156-4f5d-b6fb-9f7a54d52ded-kube-api-access-qtrlj\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.375108 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4882424e-0156-4f5d-b6fb-9f7a54d52ded-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.683452 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" event={"ID":"4882424e-0156-4f5d-b6fb-9f7a54d52ded","Type":"ContainerDied","Data":"ae2a745b10d9e2becc43684d5a320746122100c2d18f54ce7112c30b893ea8f5"} Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.683758 4870 scope.go:117] "RemoveContainer" containerID="b4bdaf5f97b2c1441d8dfbdc84f91ea6a1eb33acb677470d3a443d8125cfd7ba" Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.683545 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zd6zl" Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.690260 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" event={"ID":"56c2e555-c8e4-4391-bec8-9b98ed7a830b","Type":"ContainerStarted","Data":"591e77746f57b3ba1a1497a7f10be17ae0c7fb5a04e48e966f126a81f12a4826"} Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.690637 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.695936 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a6723230-3e6b-43cc-bda7-2aac2faa0e67","Type":"ContainerStarted","Data":"c71b3ee6e3211a9d75a6943af8274fde7342e53700f414cdfe36285e5464ffde"} Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.710920 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.713236 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-gdknd" podStartSLOduration=13.753224651 podStartE2EDuration="16.713216888s" podCreationTimestamp="2026-02-16 17:19:12 +0000 UTC" firstStartedPulling="2026-02-16 17:19:19.937453067 +0000 UTC m=+1164.420917451" lastFinishedPulling="2026-02-16 17:19:22.897445314 +0000 UTC m=+1167.380909688" observedRunningTime="2026-02-16 17:19:28.711435438 +0000 UTC m=+1173.194899832" watchObservedRunningTime="2026-02-16 17:19:28.713216888 +0000 UTC m=+1173.196681282" Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.753101 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=11.891779134 podStartE2EDuration="31.75307961s" 
podCreationTimestamp="2026-02-16 17:18:57 +0000 UTC" firstStartedPulling="2026-02-16 17:18:58.933523235 +0000 UTC m=+1143.416987609" lastFinishedPulling="2026-02-16 17:19:18.794823711 +0000 UTC m=+1163.278288085" observedRunningTime="2026-02-16 17:19:28.731481002 +0000 UTC m=+1173.214945386" watchObservedRunningTime="2026-02-16 17:19:28.75307961 +0000 UTC m=+1173.236543994" Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.783843 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zd6zl"] Feb 16 17:19:28 crc kubenswrapper[4870]: I0216 17:19:28.805136 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zd6zl"] Feb 16 17:19:29 crc kubenswrapper[4870]: I0216 17:19:29.211700 4870 scope.go:117] "RemoveContainer" containerID="2a8bf46d2ba8cbfb314941ae60e36729f80b774d1f599851dbcef6b771985dd1" Feb 16 17:19:29 crc kubenswrapper[4870]: I0216 17:19:29.707159 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1c107984-3d0e-4627-98a9-0830571e42fa","Type":"ContainerStarted","Data":"d8ebae1a92364fabfce07676fa16aa4cf0cfabbe9c5b5e22aab49dfcac9f9cdf"} Feb 16 17:19:29 crc kubenswrapper[4870]: I0216 17:19:29.711746 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24","Type":"ContainerStarted","Data":"4ca95dcd0743ca4315865a4ac1390f691719ff6fa4be8fb7cb0e4e23ecb5b433"} Feb 16 17:19:29 crc kubenswrapper[4870]: I0216 17:19:29.711893 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:29 crc kubenswrapper[4870]: I0216 17:19:29.730684 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=31.730662003 podStartE2EDuration="31.730662003s" podCreationTimestamp="2026-02-16 17:18:58 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:19:29.724775627 +0000 UTC m=+1174.208240011" watchObservedRunningTime="2026-02-16 17:19:29.730662003 +0000 UTC m=+1174.214126387" Feb 16 17:19:29 crc kubenswrapper[4870]: I0216 17:19:29.749070 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=9.670510105 podStartE2EDuration="17.74904689s" podCreationTimestamp="2026-02-16 17:19:12 +0000 UTC" firstStartedPulling="2026-02-16 17:19:20.192026369 +0000 UTC m=+1164.675490753" lastFinishedPulling="2026-02-16 17:19:28.270563154 +0000 UTC m=+1172.754027538" observedRunningTime="2026-02-16 17:19:29.744140502 +0000 UTC m=+1174.227604886" watchObservedRunningTime="2026-02-16 17:19:29.74904689 +0000 UTC m=+1174.232511274" Feb 16 17:19:29 crc kubenswrapper[4870]: I0216 17:19:29.982035 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 16 17:19:29 crc kubenswrapper[4870]: I0216 17:19:29.982109 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.083131 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.233059 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4882424e-0156-4f5d-b6fb-9f7a54d52ded" path="/var/lib/kubelet/pods/4882424e-0156-4f5d-b6fb-9f7a54d52ded/volumes" Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.721616 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"086322f7-5554-4a10-a1be-10622174e27f","Type":"ContainerStarted","Data":"e33da650737dea9303ca3d6a36810621f15b36aa7413a146a05365d2973730fc"} Feb 16 17:19:30 crc 
kubenswrapper[4870]: I0216 17:19:30.721903 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.723464 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"c7e13f68-6de2-4cf5-b655-77e0c2141ea1","Type":"ContainerStarted","Data":"e2f013867e27931a561f22b9c1fb50bd9edca878fabd63da550469de14bf3099"} Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.723906 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.725982 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe","Type":"ContainerStarted","Data":"dac155608a16f8e2de03570b7fde2e0c03468903cf753303f05ef0faf16425b5"} Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.727849 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"d158e8d5-206e-4289-a1e5-247fddf29a11","Type":"ContainerStarted","Data":"0a3e92cca22a3f89f8b79a6c34163bc6d74001ccc7eddf27d1819bfaabb9c9f0"} Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.727997 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.729250 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d","Type":"ContainerStarted","Data":"127668a9f62c341fbc2af31b8af7ce11a2edf4fa7b652cd628d286ee6609d45e"} Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.731515 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" 
event={"ID":"e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d","Type":"ContainerStarted","Data":"9a516f5269137c06fa9c968492287663dbaf97ed64e8aacb8a93b06617d4ca25"} Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.731663 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.732976 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" event={"ID":"69d435d2-948e-44d4-b0c2-8e1db0efb383","Type":"ContainerStarted","Data":"d5e60214edf22b53543466e8f30e5d3115760f981393f30e9e2a30f8295b42da"} Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.733146 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.734701 4870 generic.go:334] "Generic (PLEG): container finished" podID="34012c0b-1886-446c-983e-6a1351630186" containerID="c3fa2d1661066c6ddf3e18e71758b13755dafaf88b1a2ba264657d81873bf69b" exitCode=0 Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.734760 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rh6tb" event={"ID":"34012c0b-1886-446c-983e-6a1351630186","Type":"ContainerDied","Data":"c3fa2d1661066c6ddf3e18e71758b13755dafaf88b1a2ba264657d81873bf69b"} Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.740035 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" event={"ID":"3e04ba57-8554-4553-a62f-8b6787ba96dd","Type":"ContainerStarted","Data":"795b3a657de72cf33840c98c830a82bd8df85c57258dfe41085d244549a88db1"} Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.740993 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:30 
crc kubenswrapper[4870]: I0216 17:19:30.742895 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=20.325768986 podStartE2EDuration="29.742881561s" podCreationTimestamp="2026-02-16 17:19:01 +0000 UTC" firstStartedPulling="2026-02-16 17:19:19.943165758 +0000 UTC m=+1164.426630142" lastFinishedPulling="2026-02-16 17:19:29.360278333 +0000 UTC m=+1173.843742717" observedRunningTime="2026-02-16 17:19:30.741206624 +0000 UTC m=+1175.224670998" watchObservedRunningTime="2026-02-16 17:19:30.742881561 +0000 UTC m=+1175.226345945" Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.743412 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ktsg2" event={"ID":"2f4b2faa-7ab7-40c8-a28f-d93749011dbe","Type":"ContainerStarted","Data":"e7e46f7442c8e50572c46759dca515d0a33d47d9b9e54b194f57e548d24d2e15"} Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.743541 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ktsg2" Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.766101 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=10.829114193 podStartE2EDuration="18.766074094s" podCreationTimestamp="2026-02-16 17:19:12 +0000 UTC" firstStartedPulling="2026-02-16 17:19:20.174080425 +0000 UTC m=+1164.657544809" lastFinishedPulling="2026-02-16 17:19:28.111040326 +0000 UTC m=+1172.594504710" observedRunningTime="2026-02-16 17:19:30.762157613 +0000 UTC m=+1175.245622007" watchObservedRunningTime="2026-02-16 17:19:30.766074094 +0000 UTC m=+1175.249538478" Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.799054 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" podStartSLOduration=10.471905512 podStartE2EDuration="18.799033131s" 
podCreationTimestamp="2026-02-16 17:19:12 +0000 UTC" firstStartedPulling="2026-02-16 17:19:19.943148987 +0000 UTC m=+1164.426613371" lastFinishedPulling="2026-02-16 17:19:28.270276606 +0000 UTC m=+1172.753740990" observedRunningTime="2026-02-16 17:19:30.792739324 +0000 UTC m=+1175.276203708" watchObservedRunningTime="2026-02-16 17:19:30.799033131 +0000 UTC m=+1175.282497515" Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.831673 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" podStartSLOduration=10.504349506 podStartE2EDuration="18.831654459s" podCreationTimestamp="2026-02-16 17:19:12 +0000 UTC" firstStartedPulling="2026-02-16 17:19:19.943156888 +0000 UTC m=+1164.426621272" lastFinishedPulling="2026-02-16 17:19:28.270461841 +0000 UTC m=+1172.753926225" observedRunningTime="2026-02-16 17:19:30.827499702 +0000 UTC m=+1175.310964086" watchObservedRunningTime="2026-02-16 17:19:30.831654459 +0000 UTC m=+1175.315118843" Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.855709 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=10.914498565 podStartE2EDuration="18.855684105s" podCreationTimestamp="2026-02-16 17:19:12 +0000 UTC" firstStartedPulling="2026-02-16 17:19:20.173806817 +0000 UTC m=+1164.657271201" lastFinishedPulling="2026-02-16 17:19:28.114992357 +0000 UTC m=+1172.598456741" observedRunningTime="2026-02-16 17:19:30.853055491 +0000 UTC m=+1175.336519875" watchObservedRunningTime="2026-02-16 17:19:30.855684105 +0000 UTC m=+1175.339148479" Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.882728 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" podStartSLOduration=10.679092321 podStartE2EDuration="18.882702495s" podCreationTimestamp="2026-02-16 17:19:12 +0000 UTC" 
firstStartedPulling="2026-02-16 17:19:19.943168328 +0000 UTC m=+1164.426632712" lastFinishedPulling="2026-02-16 17:19:28.146778492 +0000 UTC m=+1172.630242886" observedRunningTime="2026-02-16 17:19:30.875502292 +0000 UTC m=+1175.358966676" watchObservedRunningTime="2026-02-16 17:19:30.882702495 +0000 UTC m=+1175.366166879" Feb 16 17:19:30 crc kubenswrapper[4870]: I0216 17:19:30.907579 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ktsg2" podStartSLOduration=17.289871231 podStartE2EDuration="25.907551744s" podCreationTimestamp="2026-02-16 17:19:05 +0000 UTC" firstStartedPulling="2026-02-16 17:19:19.943161558 +0000 UTC m=+1164.426625942" lastFinishedPulling="2026-02-16 17:19:28.560842061 +0000 UTC m=+1173.044306455" observedRunningTime="2026-02-16 17:19:30.907011239 +0000 UTC m=+1175.390475623" watchObservedRunningTime="2026-02-16 17:19:30.907551744 +0000 UTC m=+1175.391016128" Feb 16 17:19:31 crc kubenswrapper[4870]: I0216 17:19:31.755584 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rh6tb" event={"ID":"34012c0b-1886-446c-983e-6a1351630186","Type":"ContainerStarted","Data":"7686628f43cd1f9d240261e9be7db6d3802503d936b3930596c7d0491af7dd88"} Feb 16 17:19:31 crc kubenswrapper[4870]: I0216 17:19:31.756019 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:31 crc kubenswrapper[4870]: I0216 17:19:31.756036 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rh6tb" event={"ID":"34012c0b-1886-446c-983e-6a1351630186","Type":"ContainerStarted","Data":"0d70ccb08d3919c4c0efe4baac7b2c04018630e2b5c928bf62fd0a48124b65bb"} Feb 16 17:19:31 crc kubenswrapper[4870]: I0216 17:19:31.756052 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rh6tb" Feb 16 17:19:31 crc kubenswrapper[4870]: I0216 17:19:31.759119 4870 generic.go:334] 
"Generic (PLEG): container finished" podID="79e9de5e-117f-4d5e-bfee-bad481a8c0b8" containerID="921ebdc2ff4a3aaf9a00477ffa64be6180cd53f7d4033bc11eda12fa7cf9c4eb" exitCode=0 Feb 16 17:19:31 crc kubenswrapper[4870]: I0216 17:19:31.759299 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"79e9de5e-117f-4d5e-bfee-bad481a8c0b8","Type":"ContainerDied","Data":"921ebdc2ff4a3aaf9a00477ffa64be6180cd53f7d4033bc11eda12fa7cf9c4eb"} Feb 16 17:19:31 crc kubenswrapper[4870]: I0216 17:19:31.784917 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rh6tb" podStartSLOduration=20.07500217 podStartE2EDuration="26.784894288s" podCreationTimestamp="2026-02-16 17:19:05 +0000 UTC" firstStartedPulling="2026-02-16 17:19:21.78508937 +0000 UTC m=+1166.268553754" lastFinishedPulling="2026-02-16 17:19:28.494981488 +0000 UTC m=+1172.978445872" observedRunningTime="2026-02-16 17:19:31.774643679 +0000 UTC m=+1176.258108063" watchObservedRunningTime="2026-02-16 17:19:31.784894288 +0000 UTC m=+1176.268358672" Feb 16 17:19:31 crc kubenswrapper[4870]: E0216 17:19:31.912208 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod523f77f6_c829_4d3d_99c1_45bafcb30ee3.slice/crio-conmon-3f00fdf72c01fcfa772f386ad93e343124a9ed164f27f4e1851ac3ab6b7344e6.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.313152 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-7r7pv"] Feb 16 17:19:32 crc kubenswrapper[4870]: E0216 17:19:32.313570 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4882424e-0156-4f5d-b6fb-9f7a54d52ded" containerName="init" Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.313587 4870 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4882424e-0156-4f5d-b6fb-9f7a54d52ded" containerName="init" Feb 16 17:19:32 crc kubenswrapper[4870]: E0216 17:19:32.313599 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4882424e-0156-4f5d-b6fb-9f7a54d52ded" containerName="dnsmasq-dns" Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.313607 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="4882424e-0156-4f5d-b6fb-9f7a54d52ded" containerName="dnsmasq-dns" Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.313790 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="4882424e-0156-4f5d-b6fb-9f7a54d52ded" containerName="dnsmasq-dns" Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.314813 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.322446 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-7r7pv"] Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.454840 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2kjf\" (UniqueName: \"kubernetes.io/projected/0d529f47-d25e-457d-8534-b756432ce6b3-kube-api-access-c2kjf\") pod \"dnsmasq-dns-7cb5889db5-7r7pv\" (UID: \"0d529f47-d25e-457d-8534-b756432ce6b3\") " pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.455214 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d529f47-d25e-457d-8534-b756432ce6b3-config\") pod \"dnsmasq-dns-7cb5889db5-7r7pv\" (UID: \"0d529f47-d25e-457d-8534-b756432ce6b3\") " pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.455300 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/0d529f47-d25e-457d-8534-b756432ce6b3-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-7r7pv\" (UID: \"0d529f47-d25e-457d-8534-b756432ce6b3\") " pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.556879 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d529f47-d25e-457d-8534-b756432ce6b3-config\") pod \"dnsmasq-dns-7cb5889db5-7r7pv\" (UID: \"0d529f47-d25e-457d-8534-b756432ce6b3\") " pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.556989 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d529f47-d25e-457d-8534-b756432ce6b3-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-7r7pv\" (UID: \"0d529f47-d25e-457d-8534-b756432ce6b3\") " pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.557025 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2kjf\" (UniqueName: \"kubernetes.io/projected/0d529f47-d25e-457d-8534-b756432ce6b3-kube-api-access-c2kjf\") pod \"dnsmasq-dns-7cb5889db5-7r7pv\" (UID: \"0d529f47-d25e-457d-8534-b756432ce6b3\") " pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.557927 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d529f47-d25e-457d-8534-b756432ce6b3-config\") pod \"dnsmasq-dns-7cb5889db5-7r7pv\" (UID: \"0d529f47-d25e-457d-8534-b756432ce6b3\") " pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.558044 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d529f47-d25e-457d-8534-b756432ce6b3-dns-svc\") pod 
\"dnsmasq-dns-7cb5889db5-7r7pv\" (UID: \"0d529f47-d25e-457d-8534-b756432ce6b3\") " pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.586011 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2kjf\" (UniqueName: \"kubernetes.io/projected/0d529f47-d25e-457d-8534-b756432ce6b3-kube-api-access-c2kjf\") pod \"dnsmasq-dns-7cb5889db5-7r7pv\" (UID: \"0d529f47-d25e-457d-8534-b756432ce6b3\") " pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.646215 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.766449 4870 generic.go:334] "Generic (PLEG): container finished" podID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerID="3f00fdf72c01fcfa772f386ad93e343124a9ed164f27f4e1851ac3ab6b7344e6" exitCode=0 Feb 16 17:19:32 crc kubenswrapper[4870]: I0216 17:19:32.766497 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"523f77f6-c829-4d3d-99c1-45bafcb30ee3","Type":"ContainerDied","Data":"3f00fdf72c01fcfa772f386ad93e343124a9ed164f27f4e1851ac3ab6b7344e6"} Feb 16 17:19:33 crc kubenswrapper[4870]: W0216 17:19:33.295317 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d529f47_d25e_457d_8534_b756432ce6b3.slice/crio-f2faea5fa644f277c96637fa7d42b48bfed95e88bd8d8c356a3b132c9ee6001e WatchSource:0}: Error finding container f2faea5fa644f277c96637fa7d42b48bfed95e88bd8d8c356a3b132c9ee6001e: Status 404 returned error can't find the container with id f2faea5fa644f277c96637fa7d42b48bfed95e88bd8d8c356a3b132c9ee6001e Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.298520 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-7r7pv"] Feb 16 17:19:33 crc 
kubenswrapper[4870]: I0216 17:19:33.437635 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.457811 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.460329 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.461066 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.462442 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.464872 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.464994 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-cgsjf" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.576695 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/669a24d2-3e17-4ce1-aba2-c45d2a92683a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.576817 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nls87\" (UniqueName: \"kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-kube-api-access-nls87\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.576861 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/669a24d2-3e17-4ce1-aba2-c45d2a92683a-cache\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.577074 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/669a24d2-3e17-4ce1-aba2-c45d2a92683a-lock\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.577106 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9ad2f3f5-e65e-40cb-bca7-4afdd099d9df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9ad2f3f5-e65e-40cb-bca7-4afdd099d9df\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.577132 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.678799 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/669a24d2-3e17-4ce1-aba2-c45d2a92683a-lock\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.678843 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9ad2f3f5-e65e-40cb-bca7-4afdd099d9df\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9ad2f3f5-e65e-40cb-bca7-4afdd099d9df\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.678866 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.678908 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/669a24d2-3e17-4ce1-aba2-c45d2a92683a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.678933 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nls87\" (UniqueName: \"kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-kube-api-access-nls87\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.678976 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/669a24d2-3e17-4ce1-aba2-c45d2a92683a-cache\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: E0216 17:19:33.679459 4870 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.679562 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/669a24d2-3e17-4ce1-aba2-c45d2a92683a-cache\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: E0216 17:19:33.679570 4870 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 17:19:33 crc kubenswrapper[4870]: E0216 17:19:33.679635 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift podName:669a24d2-3e17-4ce1-aba2-c45d2a92683a nodeName:}" failed. No retries permitted until 2026-02-16 17:19:34.179619734 +0000 UTC m=+1178.663084118 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift") pod "swift-storage-0" (UID: "669a24d2-3e17-4ce1-aba2-c45d2a92683a") : configmap "swift-ring-files" not found Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.679493 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/669a24d2-3e17-4ce1-aba2-c45d2a92683a-lock\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.681681 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.681720 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9ad2f3f5-e65e-40cb-bca7-4afdd099d9df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9ad2f3f5-e65e-40cb-bca7-4afdd099d9df\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/59adfd8f604a9977dfda682dde3d9778093a32fca7a7e88894c1489d1f8e8752/globalmount\"" pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.683869 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/669a24d2-3e17-4ce1-aba2-c45d2a92683a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.697979 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nls87\" (UniqueName: \"kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-kube-api-access-nls87\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.727282 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9ad2f3f5-e65e-40cb-bca7-4afdd099d9df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9ad2f3f5-e65e-40cb-bca7-4afdd099d9df\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:33 crc kubenswrapper[4870]: I0216 17:19:33.776281 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" 
event={"ID":"0d529f47-d25e-457d-8534-b756432ce6b3","Type":"ContainerStarted","Data":"f2faea5fa644f277c96637fa7d42b48bfed95e88bd8d8c356a3b132c9ee6001e"} Feb 16 17:19:34 crc kubenswrapper[4870]: I0216 17:19:34.187892 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:34 crc kubenswrapper[4870]: E0216 17:19:34.188140 4870 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 17:19:34 crc kubenswrapper[4870]: E0216 17:19:34.188166 4870 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 17:19:34 crc kubenswrapper[4870]: E0216 17:19:34.188238 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift podName:669a24d2-3e17-4ce1-aba2-c45d2a92683a nodeName:}" failed. No retries permitted until 2026-02-16 17:19:35.188215343 +0000 UTC m=+1179.671679727 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift") pod "swift-storage-0" (UID: "669a24d2-3e17-4ce1-aba2-c45d2a92683a") : configmap "swift-ring-files" not found Feb 16 17:19:34 crc kubenswrapper[4870]: I0216 17:19:34.788822 4870 generic.go:334] "Generic (PLEG): container finished" podID="0d529f47-d25e-457d-8534-b756432ce6b3" containerID="ed9a5ffde5e59c41fb6020925585e0c6fc03bc47119fa661bd30a8ad73db507a" exitCode=0 Feb 16 17:19:34 crc kubenswrapper[4870]: I0216 17:19:34.788886 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" event={"ID":"0d529f47-d25e-457d-8534-b756432ce6b3","Type":"ContainerDied","Data":"ed9a5ffde5e59c41fb6020925585e0c6fc03bc47119fa661bd30a8ad73db507a"} Feb 16 17:19:34 crc kubenswrapper[4870]: I0216 17:19:34.793874 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d","Type":"ContainerStarted","Data":"209cd7ccea53df23cd2aed28e65a537360e9270a2841ad1a2f770a2b14856a35"} Feb 16 17:19:34 crc kubenswrapper[4870]: I0216 17:19:34.795324 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe","Type":"ContainerStarted","Data":"3833a730448a9233edc6f52de08cfa4aa9b1edaf1015b451efb6ffb1d61f8555"} Feb 16 17:19:34 crc kubenswrapper[4870]: I0216 17:19:34.845313 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=17.927556367 podStartE2EDuration="30.84529235s" podCreationTimestamp="2026-02-16 17:19:04 +0000 UTC" firstStartedPulling="2026-02-16 17:19:19.966658639 +0000 UTC m=+1164.450123033" lastFinishedPulling="2026-02-16 17:19:32.884394632 +0000 UTC m=+1177.367859016" observedRunningTime="2026-02-16 17:19:34.843403507 +0000 UTC m=+1179.326867901" watchObservedRunningTime="2026-02-16 
17:19:34.84529235 +0000 UTC m=+1179.328756734" Feb 16 17:19:34 crc kubenswrapper[4870]: I0216 17:19:34.849073 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:34 crc kubenswrapper[4870]: I0216 17:19:34.886485 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=13.394229067 podStartE2EDuration="25.886460168s" podCreationTimestamp="2026-02-16 17:19:09 +0000 UTC" firstStartedPulling="2026-02-16 17:19:20.394455165 +0000 UTC m=+1164.877919549" lastFinishedPulling="2026-02-16 17:19:32.886686266 +0000 UTC m=+1177.370150650" observedRunningTime="2026-02-16 17:19:34.87515243 +0000 UTC m=+1179.358616814" watchObservedRunningTime="2026-02-16 17:19:34.886460168 +0000 UTC m=+1179.369924552" Feb 16 17:19:34 crc kubenswrapper[4870]: I0216 17:19:34.906867 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:35 crc kubenswrapper[4870]: I0216 17:19:35.207741 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:35 crc kubenswrapper[4870]: E0216 17:19:35.207909 4870 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 17:19:35 crc kubenswrapper[4870]: E0216 17:19:35.207933 4870 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 17:19:35 crc kubenswrapper[4870]: E0216 17:19:35.208029 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift podName:669a24d2-3e17-4ce1-aba2-c45d2a92683a nodeName:}" 
failed. No retries permitted until 2026-02-16 17:19:37.207994305 +0000 UTC m=+1181.691458689 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift") pod "swift-storage-0" (UID: "669a24d2-3e17-4ce1-aba2-c45d2a92683a") : configmap "swift-ring-files" not found Feb 16 17:19:35 crc kubenswrapper[4870]: I0216 17:19:35.804817 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:35 crc kubenswrapper[4870]: I0216 17:19:35.842785 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.002453 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.002512 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.064532 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.132690 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-7r7pv"] Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.166344 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-tvm2g"] Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.167894 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.169154 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.174041 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.182470 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-tvm2g"] Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.200773 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-b8jsv"] Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.202567 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.209847 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.221440 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b8jsv"] Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.233038 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-config\") pod \"dnsmasq-dns-6c89d5d749-tvm2g\" (UID: \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\") " pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.233291 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-tvm2g\" (UID: \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\") " 
pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.233345 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l49hg\" (UniqueName: \"kubernetes.io/projected/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-kube-api-access-l49hg\") pod \"dnsmasq-dns-6c89d5d749-tvm2g\" (UID: \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\") " pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.233473 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-tvm2g\" (UID: \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\") " pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.296701 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.344484 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-ovs-rundir\") pod \"ovn-controller-metrics-b8jsv\" (UID: \"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.344878 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-config\") pod \"ovn-controller-metrics-b8jsv\" (UID: \"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.345077 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b8jsv\" (UID: \"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.345152 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-config\") pod \"dnsmasq-dns-6c89d5d749-tvm2g\" (UID: \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\") " pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.345230 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-combined-ca-bundle\") pod \"ovn-controller-metrics-b8jsv\" (UID: \"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.345374 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-tvm2g\" (UID: \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\") " pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.345405 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-ovn-rundir\") pod \"ovn-controller-metrics-b8jsv\" (UID: \"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.345456 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-l49hg\" (UniqueName: \"kubernetes.io/projected/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-kube-api-access-l49hg\") pod \"dnsmasq-dns-6c89d5d749-tvm2g\" (UID: \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\") " pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.345557 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cbfz\" (UniqueName: \"kubernetes.io/projected/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-kube-api-access-4cbfz\") pod \"ovn-controller-metrics-b8jsv\" (UID: \"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.345629 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-tvm2g\" (UID: \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\") " pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.347248 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-config\") pod \"dnsmasq-dns-6c89d5d749-tvm2g\" (UID: \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\") " pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.348618 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-tvm2g\" (UID: \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\") " pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.349208 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-tvm2g\" (UID: \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\") " pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.406865 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l49hg\" (UniqueName: \"kubernetes.io/projected/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-kube-api-access-l49hg\") pod \"dnsmasq-dns-6c89d5d749-tvm2g\" (UID: \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\") " pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.432846 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-tvm2g"] Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.433701 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.451132 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-ovs-rundir\") pod \"ovn-controller-metrics-b8jsv\" (UID: \"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.451173 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-config\") pod \"ovn-controller-metrics-b8jsv\" (UID: \"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.451202 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-metrics-certs-tls-certs\") pod 
\"ovn-controller-metrics-b8jsv\" (UID: \"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.451240 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-combined-ca-bundle\") pod \"ovn-controller-metrics-b8jsv\" (UID: \"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.451291 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-ovn-rundir\") pod \"ovn-controller-metrics-b8jsv\" (UID: \"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.451337 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cbfz\" (UniqueName: \"kubernetes.io/projected/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-kube-api-access-4cbfz\") pod \"ovn-controller-metrics-b8jsv\" (UID: \"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.451465 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-ovs-rundir\") pod \"ovn-controller-metrics-b8jsv\" (UID: \"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.451991 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-ovn-rundir\") pod \"ovn-controller-metrics-b8jsv\" (UID: 
\"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.452639 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-config\") pod \"ovn-controller-metrics-b8jsv\" (UID: \"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.462480 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b8jsv\" (UID: \"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.466468 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-combined-ca-bundle\") pod \"ovn-controller-metrics-b8jsv\" (UID: \"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.477534 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cbfz\" (UniqueName: \"kubernetes.io/projected/a6713b2c-65e7-42c3-8cdc-4ef240f57ee1-kube-api-access-4cbfz\") pod \"ovn-controller-metrics-b8jsv\" (UID: \"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1\") " pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.484027 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-xf2lr"] Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.485977 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.490507 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.499562 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xf2lr"] Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.541298 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-b8jsv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.552967 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-xf2lr\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.553014 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-config\") pod \"dnsmasq-dns-698758b865-xf2lr\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.553081 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xr4n\" (UniqueName: \"kubernetes.io/projected/c3bc8c41-0b58-4a15-adf0-698dcbf23806-kube-api-access-9xr4n\") pod \"dnsmasq-dns-698758b865-xf2lr\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.553178 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-xf2lr\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.553216 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-dns-svc\") pod \"dnsmasq-dns-698758b865-xf2lr\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.655480 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-xf2lr\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.655554 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-dns-svc\") pod \"dnsmasq-dns-698758b865-xf2lr\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.655621 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-xf2lr\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.655653 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-config\") pod \"dnsmasq-dns-698758b865-xf2lr\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.655692 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xr4n\" (UniqueName: \"kubernetes.io/projected/c3bc8c41-0b58-4a15-adf0-698dcbf23806-kube-api-access-9xr4n\") pod \"dnsmasq-dns-698758b865-xf2lr\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.656419 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-dns-svc\") pod \"dnsmasq-dns-698758b865-xf2lr\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.656912 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-xf2lr\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.657177 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-config\") pod \"dnsmasq-dns-698758b865-xf2lr\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.657715 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-ovsdbserver-sb\") pod 
\"dnsmasq-dns-698758b865-xf2lr\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.682677 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xr4n\" (UniqueName: \"kubernetes.io/projected/c3bc8c41-0b58-4a15-adf0-698dcbf23806-kube-api-access-9xr4n\") pod \"dnsmasq-dns-698758b865-xf2lr\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.819828 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" event={"ID":"0d529f47-d25e-457d-8534-b756432ce6b3","Type":"ContainerStarted","Data":"ef62f6d9d5c4b088d04dbe4a1decbd21e456c7951827047e2edd2bf9f744b324"} Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.820709 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.848276 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" podStartSLOduration=4.848247701 podStartE2EDuration="4.848247701s" podCreationTimestamp="2026-02-16 17:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:19:36.840688599 +0000 UTC m=+1181.324152983" watchObservedRunningTime="2026-02-16 17:19:36.848247701 +0000 UTC m=+1181.331712075" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.850032 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:36 crc kubenswrapper[4870]: I0216 17:19:36.882810 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.115161 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.121581 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.128441 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.128788 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.128999 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.129246 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-fp759" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.152789 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.165616 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwwmm\" (UniqueName: \"kubernetes.io/projected/9b2d9aae-f384-4c40-adfb-35224530b735-kube-api-access-fwwmm\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.165704 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9b2d9aae-f384-4c40-adfb-35224530b735-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.165735 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2d9aae-f384-4c40-adfb-35224530b735-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.165835 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9b2d9aae-f384-4c40-adfb-35224530b735-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.165870 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b2d9aae-f384-4c40-adfb-35224530b735-config\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.165904 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2d9aae-f384-4c40-adfb-35224530b735-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.165938 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b2d9aae-f384-4c40-adfb-35224530b735-scripts\") pod \"ovn-northd-0\" (UID: 
\"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.267643 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b2d9aae-f384-4c40-adfb-35224530b735-scripts\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.267795 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwwmm\" (UniqueName: \"kubernetes.io/projected/9b2d9aae-f384-4c40-adfb-35224530b735-kube-api-access-fwwmm\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.267872 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2d9aae-f384-4c40-adfb-35224530b735-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.267901 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2d9aae-f384-4c40-adfb-35224530b735-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.268029 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.268069 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9b2d9aae-f384-4c40-adfb-35224530b735-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.268129 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b2d9aae-f384-4c40-adfb-35224530b735-config\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.268191 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2d9aae-f384-4c40-adfb-35224530b735-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.268724 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b2d9aae-f384-4c40-adfb-35224530b735-scripts\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.269695 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9b2d9aae-f384-4c40-adfb-35224530b735-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: E0216 17:19:37.269796 4870 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 17:19:37 crc kubenswrapper[4870]: E0216 17:19:37.269808 4870 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap 
"swift-ring-files" not found Feb 16 17:19:37 crc kubenswrapper[4870]: E0216 17:19:37.269844 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift podName:669a24d2-3e17-4ce1-aba2-c45d2a92683a nodeName:}" failed. No retries permitted until 2026-02-16 17:19:41.269829623 +0000 UTC m=+1185.753294007 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift") pod "swift-storage-0" (UID: "669a24d2-3e17-4ce1-aba2-c45d2a92683a") : configmap "swift-ring-files" not found Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.270827 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b2d9aae-f384-4c40-adfb-35224530b735-config\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.276605 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2d9aae-f384-4c40-adfb-35224530b735-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.276609 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2d9aae-f384-4c40-adfb-35224530b735-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.278471 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b2d9aae-f384-4c40-adfb-35224530b735-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.288714 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwwmm\" (UniqueName: \"kubernetes.io/projected/9b2d9aae-f384-4c40-adfb-35224530b735-kube-api-access-fwwmm\") pod \"ovn-northd-0\" (UID: \"9b2d9aae-f384-4c40-adfb-35224530b735\") " pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.447180 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.451933 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-gnnq2"] Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.453616 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.460417 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.460644 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.460680 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.461970 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gnnq2"] Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.474288 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-dispersionconf\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " 
pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.474403 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-etc-swift\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.474446 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-scripts\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.474464 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-swiftconf\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.474500 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x699\" (UniqueName: \"kubernetes.io/projected/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-kube-api-access-8x699\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.474527 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-ring-data-devices\") pod \"swift-ring-rebalance-gnnq2\" (UID: 
\"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.474569 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-combined-ca-bundle\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.575458 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-etc-swift\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.575516 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-scripts\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.575535 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-swiftconf\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.575573 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x699\" (UniqueName: \"kubernetes.io/projected/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-kube-api-access-8x699\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " 
pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.575596 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-ring-data-devices\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.575634 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-combined-ca-bundle\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.575675 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-dispersionconf\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.577668 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-etc-swift\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.577688 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-ring-data-devices\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: 
I0216 17:19:37.578302 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-scripts\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.579109 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-combined-ca-bundle\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.579758 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-swiftconf\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.581760 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-dispersionconf\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.605472 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x699\" (UniqueName: \"kubernetes.io/projected/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-kube-api-access-8x699\") pod \"swift-ring-rebalance-gnnq2\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") " pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.782699 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gnnq2" Feb 16 17:19:37 crc kubenswrapper[4870]: I0216 17:19:37.829917 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" podUID="0d529f47-d25e-457d-8534-b756432ce6b3" containerName="dnsmasq-dns" containerID="cri-o://ef62f6d9d5c4b088d04dbe4a1decbd21e456c7951827047e2edd2bf9f744b324" gracePeriod=10 Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.471747 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b8jsv"] Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.543515 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.543830 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 16 17:19:38 crc kubenswrapper[4870]: W0216 17:19:38.579549 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf9770f_4fe7_4b42_9968_4fc4461ef6aa.slice/crio-1a131e7621c652f43905e4373127e03802ff08ad775d473f75b4cddb2534fc6b WatchSource:0}: Error finding container 1a131e7621c652f43905e4373127e03802ff08ad775d473f75b4cddb2534fc6b: Status 404 returned error can't find the container with id 1a131e7621c652f43905e4373127e03802ff08ad775d473f75b4cddb2534fc6b Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.585784 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gnnq2"] Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.638711 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.658551 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.729711 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-b4r87"] Feb 16 17:19:38 crc kubenswrapper[4870]: E0216 17:19:38.730154 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d529f47-d25e-457d-8534-b756432ce6b3" containerName="init" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.730175 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d529f47-d25e-457d-8534-b756432ce6b3" containerName="init" Feb 16 17:19:38 crc kubenswrapper[4870]: E0216 17:19:38.730199 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d529f47-d25e-457d-8534-b756432ce6b3" containerName="dnsmasq-dns" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.730206 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d529f47-d25e-457d-8534-b756432ce6b3" containerName="dnsmasq-dns" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.730394 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d529f47-d25e-457d-8534-b756432ce6b3" containerName="dnsmasq-dns" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.731205 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b4r87" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.733457 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.743571 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-b4r87"] Feb 16 17:19:38 crc kubenswrapper[4870]: W0216 17:19:38.797778 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3bc8c41_0b58_4a15_adf0_698dcbf23806.slice/crio-d9d3343b6360dd600a7ac6638a5dc0b34467353d6fc864da5380d6b8c65c1541 WatchSource:0}: Error finding container d9d3343b6360dd600a7ac6638a5dc0b34467353d6fc864da5380d6b8c65c1541: Status 404 returned error can't find the container with id d9d3343b6360dd600a7ac6638a5dc0b34467353d6fc864da5380d6b8c65c1541 Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.803449 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xf2lr"] Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.817789 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-tvm2g"] Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.820133 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d529f47-d25e-457d-8534-b756432ce6b3-dns-svc\") pod \"0d529f47-d25e-457d-8534-b756432ce6b3\" (UID: \"0d529f47-d25e-457d-8534-b756432ce6b3\") " Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.820286 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d529f47-d25e-457d-8534-b756432ce6b3-config\") pod \"0d529f47-d25e-457d-8534-b756432ce6b3\" (UID: \"0d529f47-d25e-457d-8534-b756432ce6b3\") " Feb 16 17:19:38 crc 
kubenswrapper[4870]: I0216 17:19:38.820360 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2kjf\" (UniqueName: \"kubernetes.io/projected/0d529f47-d25e-457d-8534-b756432ce6b3-kube-api-access-c2kjf\") pod \"0d529f47-d25e-457d-8534-b756432ce6b3\" (UID: \"0d529f47-d25e-457d-8534-b756432ce6b3\") " Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.827306 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d529f47-d25e-457d-8534-b756432ce6b3-kube-api-access-c2kjf" (OuterVolumeSpecName: "kube-api-access-c2kjf") pod "0d529f47-d25e-457d-8534-b756432ce6b3" (UID: "0d529f47-d25e-457d-8534-b756432ce6b3"). InnerVolumeSpecName "kube-api-access-c2kjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.827348 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.844732 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"79e9de5e-117f-4d5e-bfee-bad481a8c0b8","Type":"ContainerStarted","Data":"3604eeaf895dd8b5d336f20fc17c0884ed06c95f807db7fa747b6e3b31eef19c"} Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.846232 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9b2d9aae-f384-4c40-adfb-35224530b735","Type":"ContainerStarted","Data":"5cb50a35f65aadd92380361527da68f233c414cb4514ee304bd2e48e30871591"} Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.847647 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gnnq2" event={"ID":"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa","Type":"ContainerStarted","Data":"1a131e7621c652f43905e4373127e03802ff08ad775d473f75b4cddb2534fc6b"} Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.848786 4870 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" event={"ID":"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f","Type":"ContainerStarted","Data":"bbea5c1bb03d9be28e365fe2d0012f580be2e28bfeb13880a88bb0cd90b44af6"} Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.850000 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xf2lr" event={"ID":"c3bc8c41-0b58-4a15-adf0-698dcbf23806","Type":"ContainerStarted","Data":"d9d3343b6360dd600a7ac6638a5dc0b34467353d6fc864da5380d6b8c65c1541"} Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.851501 4870 generic.go:334] "Generic (PLEG): container finished" podID="0d529f47-d25e-457d-8534-b756432ce6b3" containerID="ef62f6d9d5c4b088d04dbe4a1decbd21e456c7951827047e2edd2bf9f744b324" exitCode=0 Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.851607 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.852331 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" event={"ID":"0d529f47-d25e-457d-8534-b756432ce6b3","Type":"ContainerDied","Data":"ef62f6d9d5c4b088d04dbe4a1decbd21e456c7951827047e2edd2bf9f744b324"} Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.852356 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-7r7pv" event={"ID":"0d529f47-d25e-457d-8534-b756432ce6b3","Type":"ContainerDied","Data":"f2faea5fa644f277c96637fa7d42b48bfed95e88bd8d8c356a3b132c9ee6001e"} Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.852372 4870 scope.go:117] "RemoveContainer" containerID="ef62f6d9d5c4b088d04dbe4a1decbd21e456c7951827047e2edd2bf9f744b324" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.856903 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-b8jsv" 
event={"ID":"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1","Type":"ContainerStarted","Data":"d7896e2a639a461ca6cd165fb3c392b3a5764cbdf0200b28b2847f19ee248b54"} Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.856939 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-b8jsv" event={"ID":"a6713b2c-65e7-42c3-8cdc-4ef240f57ee1","Type":"ContainerStarted","Data":"ebeae8b6679fd85f75f6d61588887121d46523f1dcc2aa10c0412173ab604cf7"} Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.862552 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d529f47-d25e-457d-8534-b756432ce6b3-config" (OuterVolumeSpecName: "config") pod "0d529f47-d25e-457d-8534-b756432ce6b3" (UID: "0d529f47-d25e-457d-8534-b756432ce6b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.869879 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-b8jsv" podStartSLOduration=2.869858619 podStartE2EDuration="2.869858619s" podCreationTimestamp="2026-02-16 17:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:19:38.868502201 +0000 UTC m=+1183.351966595" watchObservedRunningTime="2026-02-16 17:19:38.869858619 +0000 UTC m=+1183.353323003" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.878907 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d529f47-d25e-457d-8534-b756432ce6b3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0d529f47-d25e-457d-8534-b756432ce6b3" (UID: "0d529f47-d25e-457d-8534-b756432ce6b3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.880522 4870 scope.go:117] "RemoveContainer" containerID="ed9a5ffde5e59c41fb6020925585e0c6fc03bc47119fa661bd30a8ad73db507a" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.914120 4870 scope.go:117] "RemoveContainer" containerID="ef62f6d9d5c4b088d04dbe4a1decbd21e456c7951827047e2edd2bf9f744b324" Feb 16 17:19:38 crc kubenswrapper[4870]: E0216 17:19:38.914957 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef62f6d9d5c4b088d04dbe4a1decbd21e456c7951827047e2edd2bf9f744b324\": container with ID starting with ef62f6d9d5c4b088d04dbe4a1decbd21e456c7951827047e2edd2bf9f744b324 not found: ID does not exist" containerID="ef62f6d9d5c4b088d04dbe4a1decbd21e456c7951827047e2edd2bf9f744b324" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.915003 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef62f6d9d5c4b088d04dbe4a1decbd21e456c7951827047e2edd2bf9f744b324"} err="failed to get container status \"ef62f6d9d5c4b088d04dbe4a1decbd21e456c7951827047e2edd2bf9f744b324\": rpc error: code = NotFound desc = could not find container \"ef62f6d9d5c4b088d04dbe4a1decbd21e456c7951827047e2edd2bf9f744b324\": container with ID starting with ef62f6d9d5c4b088d04dbe4a1decbd21e456c7951827047e2edd2bf9f744b324 not found: ID does not exist" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.915031 4870 scope.go:117] "RemoveContainer" containerID="ed9a5ffde5e59c41fb6020925585e0c6fc03bc47119fa661bd30a8ad73db507a" Feb 16 17:19:38 crc kubenswrapper[4870]: E0216 17:19:38.915416 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed9a5ffde5e59c41fb6020925585e0c6fc03bc47119fa661bd30a8ad73db507a\": container with ID starting with 
ed9a5ffde5e59c41fb6020925585e0c6fc03bc47119fa661bd30a8ad73db507a not found: ID does not exist" containerID="ed9a5ffde5e59c41fb6020925585e0c6fc03bc47119fa661bd30a8ad73db507a" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.915446 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9a5ffde5e59c41fb6020925585e0c6fc03bc47119fa661bd30a8ad73db507a"} err="failed to get container status \"ed9a5ffde5e59c41fb6020925585e0c6fc03bc47119fa661bd30a8ad73db507a\": rpc error: code = NotFound desc = could not find container \"ed9a5ffde5e59c41fb6020925585e0c6fc03bc47119fa661bd30a8ad73db507a\": container with ID starting with ed9a5ffde5e59c41fb6020925585e0c6fc03bc47119fa661bd30a8ad73db507a not found: ID does not exist" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.922816 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5f7cff5-8546-44a7-8769-c64a9cf7049d-operator-scripts\") pod \"root-account-create-update-b4r87\" (UID: \"e5f7cff5-8546-44a7-8769-c64a9cf7049d\") " pod="openstack/root-account-create-update-b4r87" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.922972 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q27qf\" (UniqueName: \"kubernetes.io/projected/e5f7cff5-8546-44a7-8769-c64a9cf7049d-kube-api-access-q27qf\") pod \"root-account-create-update-b4r87\" (UID: \"e5f7cff5-8546-44a7-8769-c64a9cf7049d\") " pod="openstack/root-account-create-update-b4r87" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.923055 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d529f47-d25e-457d-8534-b756432ce6b3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.923066 4870 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/0d529f47-d25e-457d-8534-b756432ce6b3-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.923079 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2kjf\" (UniqueName: \"kubernetes.io/projected/0d529f47-d25e-457d-8534-b756432ce6b3-kube-api-access-c2kjf\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:38 crc kubenswrapper[4870]: I0216 17:19:38.970524 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 16 17:19:39 crc kubenswrapper[4870]: I0216 17:19:39.025154 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5f7cff5-8546-44a7-8769-c64a9cf7049d-operator-scripts\") pod \"root-account-create-update-b4r87\" (UID: \"e5f7cff5-8546-44a7-8769-c64a9cf7049d\") " pod="openstack/root-account-create-update-b4r87" Feb 16 17:19:39 crc kubenswrapper[4870]: I0216 17:19:39.025296 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q27qf\" (UniqueName: \"kubernetes.io/projected/e5f7cff5-8546-44a7-8769-c64a9cf7049d-kube-api-access-q27qf\") pod \"root-account-create-update-b4r87\" (UID: \"e5f7cff5-8546-44a7-8769-c64a9cf7049d\") " pod="openstack/root-account-create-update-b4r87" Feb 16 17:19:39 crc kubenswrapper[4870]: I0216 17:19:39.027331 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5f7cff5-8546-44a7-8769-c64a9cf7049d-operator-scripts\") pod \"root-account-create-update-b4r87\" (UID: \"e5f7cff5-8546-44a7-8769-c64a9cf7049d\") " pod="openstack/root-account-create-update-b4r87" Feb 16 17:19:39 crc kubenswrapper[4870]: I0216 17:19:39.049963 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q27qf\" (UniqueName: 
\"kubernetes.io/projected/e5f7cff5-8546-44a7-8769-c64a9cf7049d-kube-api-access-q27qf\") pod \"root-account-create-update-b4r87\" (UID: \"e5f7cff5-8546-44a7-8769-c64a9cf7049d\") " pod="openstack/root-account-create-update-b4r87" Feb 16 17:19:39 crc kubenswrapper[4870]: I0216 17:19:39.055983 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-b4r87" Feb 16 17:19:39 crc kubenswrapper[4870]: I0216 17:19:39.204681 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-7r7pv"] Feb 16 17:19:39 crc kubenswrapper[4870]: I0216 17:19:39.222759 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-7r7pv"] Feb 16 17:19:39 crc kubenswrapper[4870]: I0216 17:19:39.573220 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-b4r87"] Feb 16 17:19:39 crc kubenswrapper[4870]: I0216 17:19:39.873598 4870 generic.go:334] "Generic (PLEG): container finished" podID="f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f" containerID="6122b19332d81eb41711eab69d330c045b50b8e2847f7efa2f7d366cc3a89f89" exitCode=0 Feb 16 17:19:39 crc kubenswrapper[4870]: I0216 17:19:39.873758 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" event={"ID":"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f","Type":"ContainerDied","Data":"6122b19332d81eb41711eab69d330c045b50b8e2847f7efa2f7d366cc3a89f89"} Feb 16 17:19:39 crc kubenswrapper[4870]: I0216 17:19:39.876320 4870 generic.go:334] "Generic (PLEG): container finished" podID="c3bc8c41-0b58-4a15-adf0-698dcbf23806" containerID="39303fef85f7be6111427c24631a5cc06c369fa79aafa84d12499587fab2cda3" exitCode=0 Feb 16 17:19:39 crc kubenswrapper[4870]: I0216 17:19:39.876350 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xf2lr" 
event={"ID":"c3bc8c41-0b58-4a15-adf0-698dcbf23806","Type":"ContainerDied","Data":"39303fef85f7be6111427c24631a5cc06c369fa79aafa84d12499587fab2cda3"} Feb 16 17:19:39 crc kubenswrapper[4870]: I0216 17:19:39.881675 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b4r87" event={"ID":"e5f7cff5-8546-44a7-8769-c64a9cf7049d","Type":"ContainerStarted","Data":"13e59196c4de8011a0894313fc0a6648db91c5dd231488efb7386986259de35d"} Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.221851 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-cd7qj"] Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.223982 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cd7qj" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.246810 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d529f47-d25e-457d-8534-b756432ce6b3" path="/var/lib/kubelet/pods/0d529f47-d25e-457d-8534-b756432ce6b3/volumes" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.247410 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cd7qj"] Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.328512 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7b31-account-create-update-ktdzc"] Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.329932 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7b31-account-create-update-ktdzc" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.332747 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.341865 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7b31-account-create-update-ktdzc"] Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.349547 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/359ff9c1-712f-4a98-b617-c94a4f7a1843-operator-scripts\") pod \"glance-db-create-cd7qj\" (UID: \"359ff9c1-712f-4a98-b617-c94a4f7a1843\") " pod="openstack/glance-db-create-cd7qj" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.349754 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlw6b\" (UniqueName: \"kubernetes.io/projected/359ff9c1-712f-4a98-b617-c94a4f7a1843-kube-api-access-jlw6b\") pod \"glance-db-create-cd7qj\" (UID: \"359ff9c1-712f-4a98-b617-c94a4f7a1843\") " pod="openstack/glance-db-create-cd7qj" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.451742 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlw6b\" (UniqueName: \"kubernetes.io/projected/359ff9c1-712f-4a98-b617-c94a4f7a1843-kube-api-access-jlw6b\") pod \"glance-db-create-cd7qj\" (UID: \"359ff9c1-712f-4a98-b617-c94a4f7a1843\") " pod="openstack/glance-db-create-cd7qj" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.452228 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/359ff9c1-712f-4a98-b617-c94a4f7a1843-operator-scripts\") pod \"glance-db-create-cd7qj\" (UID: \"359ff9c1-712f-4a98-b617-c94a4f7a1843\") " pod="openstack/glance-db-create-cd7qj" 
Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.453225 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/359ff9c1-712f-4a98-b617-c94a4f7a1843-operator-scripts\") pod \"glance-db-create-cd7qj\" (UID: \"359ff9c1-712f-4a98-b617-c94a4f7a1843\") " pod="openstack/glance-db-create-cd7qj" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.453471 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxb2r\" (UniqueName: \"kubernetes.io/projected/3e18a14e-1e9e-44d9-8ed9-a93214973da3-kube-api-access-rxb2r\") pod \"glance-7b31-account-create-update-ktdzc\" (UID: \"3e18a14e-1e9e-44d9-8ed9-a93214973da3\") " pod="openstack/glance-7b31-account-create-update-ktdzc" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.453638 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e18a14e-1e9e-44d9-8ed9-a93214973da3-operator-scripts\") pod \"glance-7b31-account-create-update-ktdzc\" (UID: \"3e18a14e-1e9e-44d9-8ed9-a93214973da3\") " pod="openstack/glance-7b31-account-create-update-ktdzc" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.474611 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlw6b\" (UniqueName: \"kubernetes.io/projected/359ff9c1-712f-4a98-b617-c94a4f7a1843-kube-api-access-jlw6b\") pod \"glance-db-create-cd7qj\" (UID: \"359ff9c1-712f-4a98-b617-c94a4f7a1843\") " pod="openstack/glance-db-create-cd7qj" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.550442 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-cd7qj" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.555956 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxb2r\" (UniqueName: \"kubernetes.io/projected/3e18a14e-1e9e-44d9-8ed9-a93214973da3-kube-api-access-rxb2r\") pod \"glance-7b31-account-create-update-ktdzc\" (UID: \"3e18a14e-1e9e-44d9-8ed9-a93214973da3\") " pod="openstack/glance-7b31-account-create-update-ktdzc" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.556027 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e18a14e-1e9e-44d9-8ed9-a93214973da3-operator-scripts\") pod \"glance-7b31-account-create-update-ktdzc\" (UID: \"3e18a14e-1e9e-44d9-8ed9-a93214973da3\") " pod="openstack/glance-7b31-account-create-update-ktdzc" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.556779 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e18a14e-1e9e-44d9-8ed9-a93214973da3-operator-scripts\") pod \"glance-7b31-account-create-update-ktdzc\" (UID: \"3e18a14e-1e9e-44d9-8ed9-a93214973da3\") " pod="openstack/glance-7b31-account-create-update-ktdzc" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.573679 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxb2r\" (UniqueName: \"kubernetes.io/projected/3e18a14e-1e9e-44d9-8ed9-a93214973da3-kube-api-access-rxb2r\") pod \"glance-7b31-account-create-update-ktdzc\" (UID: \"3e18a14e-1e9e-44d9-8ed9-a93214973da3\") " pod="openstack/glance-7b31-account-create-update-ktdzc" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.656028 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.658545 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7b31-account-create-update-ktdzc" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.764061 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-config\") pod \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\" (UID: \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\") " Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.764141 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-dns-svc\") pod \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\" (UID: \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\") " Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.764415 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l49hg\" (UniqueName: \"kubernetes.io/projected/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-kube-api-access-l49hg\") pod \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\" (UID: \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\") " Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.764442 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-ovsdbserver-sb\") pod \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\" (UID: \"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f\") " Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.767548 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-kube-api-access-l49hg" (OuterVolumeSpecName: "kube-api-access-l49hg") pod "f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f" (UID: 
"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f"). InnerVolumeSpecName "kube-api-access-l49hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.788535 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-config" (OuterVolumeSpecName: "config") pod "f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f" (UID: "f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.790038 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f" (UID: "f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.803751 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f" (UID: "f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.866228 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.866260 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l49hg\" (UniqueName: \"kubernetes.io/projected/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-kube-api-access-l49hg\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.866270 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.866278 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.897710 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" event={"ID":"f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f","Type":"ContainerDied","Data":"bbea5c1bb03d9be28e365fe2d0012f580be2e28bfeb13880a88bb0cd90b44af6"} Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.898803 4870 scope.go:117] "RemoveContainer" containerID="6122b19332d81eb41711eab69d330c045b50b8e2847f7efa2f7d366cc3a89f89" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.897804 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-tvm2g" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.910196 4870 generic.go:334] "Generic (PLEG): container finished" podID="e5f7cff5-8546-44a7-8769-c64a9cf7049d" containerID="6fdf539a948da90ac3342d234a7d1aadab12d61bccaf87a1f85aa7b9d53b518a" exitCode=0 Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.910287 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b4r87" event={"ID":"e5f7cff5-8546-44a7-8769-c64a9cf7049d","Type":"ContainerDied","Data":"6fdf539a948da90ac3342d234a7d1aadab12d61bccaf87a1f85aa7b9d53b518a"} Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.912329 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"79e9de5e-117f-4d5e-bfee-bad481a8c0b8","Type":"ContainerStarted","Data":"ab2559ffe68ca99e98b09c33220f8c62dcf39eee2e3881bd62e5e5d1e0e15ba2"} Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.912595 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:40 crc kubenswrapper[4870]: I0216 17:19:40.951590 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=19.525460495 podStartE2EDuration="38.951562316s" podCreationTimestamp="2026-02-16 17:19:02 +0000 UTC" firstStartedPulling="2026-02-16 17:19:18.511425967 +0000 UTC m=+1162.994890351" lastFinishedPulling="2026-02-16 17:19:37.937527788 +0000 UTC m=+1182.420992172" observedRunningTime="2026-02-16 17:19:40.946166124 +0000 UTC m=+1185.429630528" watchObservedRunningTime="2026-02-16 17:19:40.951562316 +0000 UTC m=+1185.435026700" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.026840 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-tvm2g"] Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.035809 4870 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-tvm2g"] Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.157285 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-wp94p"] Feb 16 17:19:41 crc kubenswrapper[4870]: E0216 17:19:41.158031 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f" containerName="init" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.158053 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f" containerName="init" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.158236 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f" containerName="init" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.159007 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wp94p" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.165067 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wp94p"] Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.258935 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8cad-account-create-update-7qqnt"] Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.260515 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8cad-account-create-update-7qqnt" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.263045 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.270122 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8cad-account-create-update-7qqnt"] Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.273857 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.274131 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps2vc\" (UniqueName: \"kubernetes.io/projected/bcfc139e-ad4c-4214-9403-73634951cd57-kube-api-access-ps2vc\") pod \"keystone-db-create-wp94p\" (UID: \"bcfc139e-ad4c-4214-9403-73634951cd57\") " pod="openstack/keystone-db-create-wp94p" Feb 16 17:19:41 crc kubenswrapper[4870]: E0216 17:19:41.274269 4870 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 17:19:41 crc kubenswrapper[4870]: E0216 17:19:41.274300 4870 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 17:19:41 crc kubenswrapper[4870]: E0216 17:19:41.274354 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift podName:669a24d2-3e17-4ce1-aba2-c45d2a92683a nodeName:}" failed. No retries permitted until 2026-02-16 17:19:49.274335367 +0000 UTC m=+1193.757799751 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift") pod "swift-storage-0" (UID: "669a24d2-3e17-4ce1-aba2-c45d2a92683a") : configmap "swift-ring-files" not found Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.274282 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcfc139e-ad4c-4214-9403-73634951cd57-operator-scripts\") pod \"keystone-db-create-wp94p\" (UID: \"bcfc139e-ad4c-4214-9403-73634951cd57\") " pod="openstack/keystone-db-create-wp94p" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.376640 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vcbq\" (UniqueName: \"kubernetes.io/projected/661affca-3ccf-42b7-9095-eb1dbd2e38fb-kube-api-access-8vcbq\") pod \"keystone-8cad-account-create-update-7qqnt\" (UID: \"661affca-3ccf-42b7-9095-eb1dbd2e38fb\") " pod="openstack/keystone-8cad-account-create-update-7qqnt" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.376696 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps2vc\" (UniqueName: \"kubernetes.io/projected/bcfc139e-ad4c-4214-9403-73634951cd57-kube-api-access-ps2vc\") pod \"keystone-db-create-wp94p\" (UID: \"bcfc139e-ad4c-4214-9403-73634951cd57\") " pod="openstack/keystone-db-create-wp94p" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.376714 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/661affca-3ccf-42b7-9095-eb1dbd2e38fb-operator-scripts\") pod \"keystone-8cad-account-create-update-7qqnt\" (UID: \"661affca-3ccf-42b7-9095-eb1dbd2e38fb\") " pod="openstack/keystone-8cad-account-create-update-7qqnt" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.376795 
4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcfc139e-ad4c-4214-9403-73634951cd57-operator-scripts\") pod \"keystone-db-create-wp94p\" (UID: \"bcfc139e-ad4c-4214-9403-73634951cd57\") " pod="openstack/keystone-db-create-wp94p" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.377554 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcfc139e-ad4c-4214-9403-73634951cd57-operator-scripts\") pod \"keystone-db-create-wp94p\" (UID: \"bcfc139e-ad4c-4214-9403-73634951cd57\") " pod="openstack/keystone-db-create-wp94p" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.396029 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps2vc\" (UniqueName: \"kubernetes.io/projected/bcfc139e-ad4c-4214-9403-73634951cd57-kube-api-access-ps2vc\") pod \"keystone-db-create-wp94p\" (UID: \"bcfc139e-ad4c-4214-9403-73634951cd57\") " pod="openstack/keystone-db-create-wp94p" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.466023 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-x7s6z"] Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.467742 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-x7s6z" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.479540 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vcbq\" (UniqueName: \"kubernetes.io/projected/661affca-3ccf-42b7-9095-eb1dbd2e38fb-kube-api-access-8vcbq\") pod \"keystone-8cad-account-create-update-7qqnt\" (UID: \"661affca-3ccf-42b7-9095-eb1dbd2e38fb\") " pod="openstack/keystone-8cad-account-create-update-7qqnt" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.479593 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/661affca-3ccf-42b7-9095-eb1dbd2e38fb-operator-scripts\") pod \"keystone-8cad-account-create-update-7qqnt\" (UID: \"661affca-3ccf-42b7-9095-eb1dbd2e38fb\") " pod="openstack/keystone-8cad-account-create-update-7qqnt" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.481181 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/661affca-3ccf-42b7-9095-eb1dbd2e38fb-operator-scripts\") pod \"keystone-8cad-account-create-update-7qqnt\" (UID: \"661affca-3ccf-42b7-9095-eb1dbd2e38fb\") " pod="openstack/keystone-8cad-account-create-update-7qqnt" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.483517 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wp94p" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.483680 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-2f89-account-create-update-hx5vd"] Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.486521 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2f89-account-create-update-hx5vd" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.490365 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.499613 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x7s6z"] Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.501396 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vcbq\" (UniqueName: \"kubernetes.io/projected/661affca-3ccf-42b7-9095-eb1dbd2e38fb-kube-api-access-8vcbq\") pod \"keystone-8cad-account-create-update-7qqnt\" (UID: \"661affca-3ccf-42b7-9095-eb1dbd2e38fb\") " pod="openstack/keystone-8cad-account-create-update-7qqnt" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.506533 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2f89-account-create-update-hx5vd"] Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.579525 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8cad-account-create-update-7qqnt" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.581041 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6500ae3-d348-4473-81db-c795446ba15d-operator-scripts\") pod \"placement-2f89-account-create-update-hx5vd\" (UID: \"e6500ae3-d348-4473-81db-c795446ba15d\") " pod="openstack/placement-2f89-account-create-update-hx5vd" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.581222 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c2c95a8-3204-4882-b77c-4f09f82f9b14-operator-scripts\") pod \"placement-db-create-x7s6z\" (UID: \"5c2c95a8-3204-4882-b77c-4f09f82f9b14\") " pod="openstack/placement-db-create-x7s6z" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.581632 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td7gb\" (UniqueName: \"kubernetes.io/projected/e6500ae3-d348-4473-81db-c795446ba15d-kube-api-access-td7gb\") pod \"placement-2f89-account-create-update-hx5vd\" (UID: \"e6500ae3-d348-4473-81db-c795446ba15d\") " pod="openstack/placement-2f89-account-create-update-hx5vd" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.581732 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmwgr\" (UniqueName: \"kubernetes.io/projected/5c2c95a8-3204-4882-b77c-4f09f82f9b14-kube-api-access-vmwgr\") pod \"placement-db-create-x7s6z\" (UID: \"5c2c95a8-3204-4882-b77c-4f09f82f9b14\") " pod="openstack/placement-db-create-x7s6z" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.683329 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e6500ae3-d348-4473-81db-c795446ba15d-operator-scripts\") pod \"placement-2f89-account-create-update-hx5vd\" (UID: \"e6500ae3-d348-4473-81db-c795446ba15d\") " pod="openstack/placement-2f89-account-create-update-hx5vd" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.683545 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c2c95a8-3204-4882-b77c-4f09f82f9b14-operator-scripts\") pod \"placement-db-create-x7s6z\" (UID: \"5c2c95a8-3204-4882-b77c-4f09f82f9b14\") " pod="openstack/placement-db-create-x7s6z" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.683576 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td7gb\" (UniqueName: \"kubernetes.io/projected/e6500ae3-d348-4473-81db-c795446ba15d-kube-api-access-td7gb\") pod \"placement-2f89-account-create-update-hx5vd\" (UID: \"e6500ae3-d348-4473-81db-c795446ba15d\") " pod="openstack/placement-2f89-account-create-update-hx5vd" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.683613 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmwgr\" (UniqueName: \"kubernetes.io/projected/5c2c95a8-3204-4882-b77c-4f09f82f9b14-kube-api-access-vmwgr\") pod \"placement-db-create-x7s6z\" (UID: \"5c2c95a8-3204-4882-b77c-4f09f82f9b14\") " pod="openstack/placement-db-create-x7s6z" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.685146 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6500ae3-d348-4473-81db-c795446ba15d-operator-scripts\") pod \"placement-2f89-account-create-update-hx5vd\" (UID: \"e6500ae3-d348-4473-81db-c795446ba15d\") " pod="openstack/placement-2f89-account-create-update-hx5vd" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.685355 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c2c95a8-3204-4882-b77c-4f09f82f9b14-operator-scripts\") pod \"placement-db-create-x7s6z\" (UID: \"5c2c95a8-3204-4882-b77c-4f09f82f9b14\") " pod="openstack/placement-db-create-x7s6z" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.712712 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmwgr\" (UniqueName: \"kubernetes.io/projected/5c2c95a8-3204-4882-b77c-4f09f82f9b14-kube-api-access-vmwgr\") pod \"placement-db-create-x7s6z\" (UID: \"5c2c95a8-3204-4882-b77c-4f09f82f9b14\") " pod="openstack/placement-db-create-x7s6z" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.716898 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td7gb\" (UniqueName: \"kubernetes.io/projected/e6500ae3-d348-4473-81db-c795446ba15d-kube-api-access-td7gb\") pod \"placement-2f89-account-create-update-hx5vd\" (UID: \"e6500ae3-d348-4473-81db-c795446ba15d\") " pod="openstack/placement-2f89-account-create-update-hx5vd" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.859398 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x7s6z" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.866293 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2f89-account-create-update-hx5vd" Feb 16 17:19:41 crc kubenswrapper[4870]: I0216 17:19:41.930994 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Feb 16 17:19:42 crc kubenswrapper[4870]: I0216 17:19:42.263023 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f" path="/var/lib/kubelet/pods/f34b3669-c6da-44d5-9a9c-a0b7c4e7cc8f/volumes" Feb 16 17:19:42 crc kubenswrapper[4870]: I0216 17:19:42.278908 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 17:19:44 crc kubenswrapper[4870]: I0216 17:19:44.100335 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="d158e8d5-206e-4289-a1e5-247fddf29a11" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 17:19:44 crc kubenswrapper[4870]: I0216 17:19:44.235433 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 17:19:44 crc kubenswrapper[4870]: I0216 17:19:44.280421 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.328734 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b4r87" Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.383787 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5f7cff5-8546-44a7-8769-c64a9cf7049d-operator-scripts\") pod \"e5f7cff5-8546-44a7-8769-c64a9cf7049d\" (UID: \"e5f7cff5-8546-44a7-8769-c64a9cf7049d\") " Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.383879 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q27qf\" (UniqueName: \"kubernetes.io/projected/e5f7cff5-8546-44a7-8769-c64a9cf7049d-kube-api-access-q27qf\") pod \"e5f7cff5-8546-44a7-8769-c64a9cf7049d\" (UID: \"e5f7cff5-8546-44a7-8769-c64a9cf7049d\") " Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.385785 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5f7cff5-8546-44a7-8769-c64a9cf7049d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5f7cff5-8546-44a7-8769-c64a9cf7049d" (UID: "e5f7cff5-8546-44a7-8769-c64a9cf7049d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.398257 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f7cff5-8546-44a7-8769-c64a9cf7049d-kube-api-access-q27qf" (OuterVolumeSpecName: "kube-api-access-q27qf") pod "e5f7cff5-8546-44a7-8769-c64a9cf7049d" (UID: "e5f7cff5-8546-44a7-8769-c64a9cf7049d"). InnerVolumeSpecName "kube-api-access-q27qf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.485897 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5f7cff5-8546-44a7-8769-c64a9cf7049d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.486213 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q27qf\" (UniqueName: \"kubernetes.io/projected/e5f7cff5-8546-44a7-8769-c64a9cf7049d-kube-api-access-q27qf\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.755593 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cd7qj"] Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.776696 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8cad-account-create-update-7qqnt"] Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.790108 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7b31-account-create-update-ktdzc"] Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.798718 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x7s6z"] Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.976235 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cd7qj" event={"ID":"359ff9c1-712f-4a98-b617-c94a4f7a1843","Type":"ContainerStarted","Data":"175b2beea5cbfacea0fc5a1b61a865b288aac569650240f3654fa21897fcd65f"} Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.978531 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cd7qj" event={"ID":"359ff9c1-712f-4a98-b617-c94a4f7a1843","Type":"ContainerStarted","Data":"e53a3ed1842234ab77d0bc31266d804992af32247babdd6db3f977455b4f1d81"} Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.978805 4870 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/placement-db-create-x7s6z" event={"ID":"5c2c95a8-3204-4882-b77c-4f09f82f9b14","Type":"ContainerStarted","Data":"e397bc7c78b54dbb9dcf6a4e6f04d3b0347e393e86dbc83335cb949a19eab354"} Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.981475 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xf2lr" event={"ID":"c3bc8c41-0b58-4a15-adf0-698dcbf23806","Type":"ContainerStarted","Data":"3902d0a093480de91f748b31235dbd0b8acbd02d2b81fb835eb91a91a66388b0"} Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.981898 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.984505 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-b4r87" Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.984500 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b4r87" event={"ID":"e5f7cff5-8546-44a7-8769-c64a9cf7049d","Type":"ContainerDied","Data":"13e59196c4de8011a0894313fc0a6648db91c5dd231488efb7386986259de35d"} Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.984615 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13e59196c4de8011a0894313fc0a6648db91c5dd231488efb7386986259de35d" Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.986323 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9b2d9aae-f384-4c40-adfb-35224530b735","Type":"ContainerStarted","Data":"f9ccbc231c9f2eb6685389a63eb33fb1406a048523c1fa012a821bfb66687452"} Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.986350 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"9b2d9aae-f384-4c40-adfb-35224530b735","Type":"ContainerStarted","Data":"ccd6e45f21332c3e86df9f3c2169dd14e7f1af4bf0dd6b8ede9d1dda38c8f07f"} Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.987213 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.988676 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8cad-account-create-update-7qqnt" event={"ID":"661affca-3ccf-42b7-9095-eb1dbd2e38fb","Type":"ContainerStarted","Data":"f31fc7f985f1271058f70b33cabb659925957905d71685cbec6ddc2c4bfa0dce"} Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.991447 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b31-account-create-update-ktdzc" event={"ID":"3e18a14e-1e9e-44d9-8ed9-a93214973da3","Type":"ContainerStarted","Data":"75b250693fba8a1e5952b4611ba5c2459cbbc643604ae076874fb0842c520208"} Feb 16 17:19:46 crc kubenswrapper[4870]: I0216 17:19:46.995279 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gnnq2" event={"ID":"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa","Type":"ContainerStarted","Data":"af6b227d7b7b03f2911132c3978f36c7186192e7f004d242945d6bf5393a016e"} Feb 16 17:19:47 crc kubenswrapper[4870]: I0216 17:19:47.002713 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"523f77f6-c829-4d3d-99c1-45bafcb30ee3","Type":"ContainerStarted","Data":"f8ffa535d7520af1c4e297425d560fd1a7020c5dc0ea84661efce87959918cc2"} Feb 16 17:19:47 crc kubenswrapper[4870]: I0216 17:19:47.003188 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-cd7qj" podStartSLOduration=7.003168065 podStartE2EDuration="7.003168065s" podCreationTimestamp="2026-02-16 17:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 17:19:46.996996041 +0000 UTC m=+1191.480460635" watchObservedRunningTime="2026-02-16 17:19:47.003168065 +0000 UTC m=+1191.486632449" Feb 16 17:19:47 crc kubenswrapper[4870]: I0216 17:19:47.020646 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2f89-account-create-update-hx5vd"] Feb 16 17:19:47 crc kubenswrapper[4870]: I0216 17:19:47.031285 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wp94p"] Feb 16 17:19:47 crc kubenswrapper[4870]: I0216 17:19:47.036180 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.44232938 podStartE2EDuration="10.036151113s" podCreationTimestamp="2026-02-16 17:19:37 +0000 UTC" firstStartedPulling="2026-02-16 17:19:38.81586764 +0000 UTC m=+1183.299332024" lastFinishedPulling="2026-02-16 17:19:45.409689373 +0000 UTC m=+1189.893153757" observedRunningTime="2026-02-16 17:19:47.022716705 +0000 UTC m=+1191.506181109" watchObservedRunningTime="2026-02-16 17:19:47.036151113 +0000 UTC m=+1191.519615497" Feb 16 17:19:47 crc kubenswrapper[4870]: I0216 17:19:47.049166 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-xf2lr" podStartSLOduration=11.049140318 podStartE2EDuration="11.049140318s" podCreationTimestamp="2026-02-16 17:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:19:47.040078823 +0000 UTC m=+1191.523543227" watchObservedRunningTime="2026-02-16 17:19:47.049140318 +0000 UTC m=+1191.532604702" Feb 16 17:19:47 crc kubenswrapper[4870]: I0216 17:19:47.057463 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-gnnq2" podStartSLOduration=2.48231857 podStartE2EDuration="10.057439752s" podCreationTimestamp="2026-02-16 17:19:37 +0000 UTC" 
firstStartedPulling="2026-02-16 17:19:38.587005911 +0000 UTC m=+1183.070470295" lastFinishedPulling="2026-02-16 17:19:46.162127093 +0000 UTC m=+1190.645591477" observedRunningTime="2026-02-16 17:19:47.056630859 +0000 UTC m=+1191.540095243" watchObservedRunningTime="2026-02-16 17:19:47.057439752 +0000 UTC m=+1191.540904136" Feb 16 17:19:47 crc kubenswrapper[4870]: W0216 17:19:47.087321 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6500ae3_d348_4473_81db_c795446ba15d.slice/crio-079e9ddba9cda3595ae111065845e27a221b1f0403b76c68f83344772567b91e WatchSource:0}: Error finding container 079e9ddba9cda3595ae111065845e27a221b1f0403b76c68f83344772567b91e: Status 404 returned error can't find the container with id 079e9ddba9cda3595ae111065845e27a221b1f0403b76c68f83344772567b91e Feb 16 17:19:47 crc kubenswrapper[4870]: W0216 17:19:47.090440 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcfc139e_ad4c_4214_9403_73634951cd57.slice/crio-9c45f8b462bda477c4d8a696a15e35b4085ca84d3ce271672f9b01087ce00851 WatchSource:0}: Error finding container 9c45f8b462bda477c4d8a696a15e35b4085ca84d3ce271672f9b01087ce00851: Status 404 returned error can't find the container with id 9c45f8b462bda477c4d8a696a15e35b4085ca84d3ce271672f9b01087ce00851 Feb 16 17:19:48 crc kubenswrapper[4870]: I0216 17:19:48.015286 4870 generic.go:334] "Generic (PLEG): container finished" podID="3e18a14e-1e9e-44d9-8ed9-a93214973da3" containerID="d8c9beafa329b75546850554f4becaafe7906de071b002443984c51563e267ca" exitCode=0 Feb 16 17:19:48 crc kubenswrapper[4870]: I0216 17:19:48.015350 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b31-account-create-update-ktdzc" event={"ID":"3e18a14e-1e9e-44d9-8ed9-a93214973da3","Type":"ContainerDied","Data":"d8c9beafa329b75546850554f4becaafe7906de071b002443984c51563e267ca"} Feb 16 
17:19:48 crc kubenswrapper[4870]: I0216 17:19:48.018181 4870 generic.go:334] "Generic (PLEG): container finished" podID="bcfc139e-ad4c-4214-9403-73634951cd57" containerID="d57972e0f4f6d967fbbc0cf9eb82f7bb17be32dd07c9d14b89dce22d7b89b024" exitCode=0 Feb 16 17:19:48 crc kubenswrapper[4870]: I0216 17:19:48.018293 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wp94p" event={"ID":"bcfc139e-ad4c-4214-9403-73634951cd57","Type":"ContainerDied","Data":"d57972e0f4f6d967fbbc0cf9eb82f7bb17be32dd07c9d14b89dce22d7b89b024"} Feb 16 17:19:48 crc kubenswrapper[4870]: I0216 17:19:48.018361 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wp94p" event={"ID":"bcfc139e-ad4c-4214-9403-73634951cd57","Type":"ContainerStarted","Data":"9c45f8b462bda477c4d8a696a15e35b4085ca84d3ce271672f9b01087ce00851"} Feb 16 17:19:48 crc kubenswrapper[4870]: I0216 17:19:48.020051 4870 generic.go:334] "Generic (PLEG): container finished" podID="e6500ae3-d348-4473-81db-c795446ba15d" containerID="04b9bcc99d8c879b4193a41375e6dbe7fdd36833f0b0ddc4e5affd10ffcbab60" exitCode=0 Feb 16 17:19:48 crc kubenswrapper[4870]: I0216 17:19:48.020148 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2f89-account-create-update-hx5vd" event={"ID":"e6500ae3-d348-4473-81db-c795446ba15d","Type":"ContainerDied","Data":"04b9bcc99d8c879b4193a41375e6dbe7fdd36833f0b0ddc4e5affd10ffcbab60"} Feb 16 17:19:48 crc kubenswrapper[4870]: I0216 17:19:48.020184 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2f89-account-create-update-hx5vd" event={"ID":"e6500ae3-d348-4473-81db-c795446ba15d","Type":"ContainerStarted","Data":"079e9ddba9cda3595ae111065845e27a221b1f0403b76c68f83344772567b91e"} Feb 16 17:19:48 crc kubenswrapper[4870]: I0216 17:19:48.022103 4870 generic.go:334] "Generic (PLEG): container finished" podID="661affca-3ccf-42b7-9095-eb1dbd2e38fb" 
containerID="a5a3201f0149777fdd0d4553d617da8b73a47f363635814d205deb8705048357" exitCode=0 Feb 16 17:19:48 crc kubenswrapper[4870]: I0216 17:19:48.022193 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8cad-account-create-update-7qqnt" event={"ID":"661affca-3ccf-42b7-9095-eb1dbd2e38fb","Type":"ContainerDied","Data":"a5a3201f0149777fdd0d4553d617da8b73a47f363635814d205deb8705048357"} Feb 16 17:19:48 crc kubenswrapper[4870]: I0216 17:19:48.024247 4870 generic.go:334] "Generic (PLEG): container finished" podID="359ff9c1-712f-4a98-b617-c94a4f7a1843" containerID="175b2beea5cbfacea0fc5a1b61a865b288aac569650240f3654fa21897fcd65f" exitCode=0 Feb 16 17:19:48 crc kubenswrapper[4870]: I0216 17:19:48.024314 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cd7qj" event={"ID":"359ff9c1-712f-4a98-b617-c94a4f7a1843","Type":"ContainerDied","Data":"175b2beea5cbfacea0fc5a1b61a865b288aac569650240f3654fa21897fcd65f"} Feb 16 17:19:48 crc kubenswrapper[4870]: I0216 17:19:48.026050 4870 generic.go:334] "Generic (PLEG): container finished" podID="5c2c95a8-3204-4882-b77c-4f09f82f9b14" containerID="ce534793383eb4cbbb9c777a22406bfc631b2ab811d5abfaefb68c6ca5d94d6d" exitCode=0 Feb 16 17:19:48 crc kubenswrapper[4870]: I0216 17:19:48.026178 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x7s6z" event={"ID":"5c2c95a8-3204-4882-b77c-4f09f82f9b14","Type":"ContainerDied","Data":"ce534793383eb4cbbb9c777a22406bfc631b2ab811d5abfaefb68c6ca5d94d6d"} Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.348722 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0" Feb 16 17:19:49 crc kubenswrapper[4870]: E0216 17:19:49.348994 4870 projected.go:288] Couldn't get 
configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 17:19:49 crc kubenswrapper[4870]: E0216 17:19:49.349010 4870 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 17:19:49 crc kubenswrapper[4870]: E0216 17:19:49.349060 4870 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift podName:669a24d2-3e17-4ce1-aba2-c45d2a92683a nodeName:}" failed. No retries permitted until 2026-02-16 17:20:05.349044424 +0000 UTC m=+1209.832508808 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift") pod "swift-storage-0" (UID: "669a24d2-3e17-4ce1-aba2-c45d2a92683a") : configmap "swift-ring-files" not found Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.506742 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7b31-account-create-update-ktdzc" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.552080 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e18a14e-1e9e-44d9-8ed9-a93214973da3-operator-scripts\") pod \"3e18a14e-1e9e-44d9-8ed9-a93214973da3\" (UID: \"3e18a14e-1e9e-44d9-8ed9-a93214973da3\") " Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.552259 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxb2r\" (UniqueName: \"kubernetes.io/projected/3e18a14e-1e9e-44d9-8ed9-a93214973da3-kube-api-access-rxb2r\") pod \"3e18a14e-1e9e-44d9-8ed9-a93214973da3\" (UID: \"3e18a14e-1e9e-44d9-8ed9-a93214973da3\") " Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.553232 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e18a14e-1e9e-44d9-8ed9-a93214973da3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e18a14e-1e9e-44d9-8ed9-a93214973da3" (UID: "3e18a14e-1e9e-44d9-8ed9-a93214973da3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.557899 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e18a14e-1e9e-44d9-8ed9-a93214973da3-kube-api-access-rxb2r" (OuterVolumeSpecName: "kube-api-access-rxb2r") pod "3e18a14e-1e9e-44d9-8ed9-a93214973da3" (UID: "3e18a14e-1e9e-44d9-8ed9-a93214973da3"). InnerVolumeSpecName "kube-api-access-rxb2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.657866 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxb2r\" (UniqueName: \"kubernetes.io/projected/3e18a14e-1e9e-44d9-8ed9-a93214973da3-kube-api-access-rxb2r\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.657896 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e18a14e-1e9e-44d9-8ed9-a93214973da3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.687642 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8cad-account-create-update-7qqnt" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.759184 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vcbq\" (UniqueName: \"kubernetes.io/projected/661affca-3ccf-42b7-9095-eb1dbd2e38fb-kube-api-access-8vcbq\") pod \"661affca-3ccf-42b7-9095-eb1dbd2e38fb\" (UID: \"661affca-3ccf-42b7-9095-eb1dbd2e38fb\") " Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.759354 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/661affca-3ccf-42b7-9095-eb1dbd2e38fb-operator-scripts\") pod \"661affca-3ccf-42b7-9095-eb1dbd2e38fb\" (UID: \"661affca-3ccf-42b7-9095-eb1dbd2e38fb\") " Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.760037 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/661affca-3ccf-42b7-9095-eb1dbd2e38fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "661affca-3ccf-42b7-9095-eb1dbd2e38fb" (UID: "661affca-3ccf-42b7-9095-eb1dbd2e38fb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.761751 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/661affca-3ccf-42b7-9095-eb1dbd2e38fb-kube-api-access-8vcbq" (OuterVolumeSpecName: "kube-api-access-8vcbq") pod "661affca-3ccf-42b7-9095-eb1dbd2e38fb" (UID: "661affca-3ccf-42b7-9095-eb1dbd2e38fb"). InnerVolumeSpecName "kube-api-access-8vcbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.811666 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x7s6z" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.828085 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cd7qj" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.838698 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wp94p" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.853203 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2f89-account-create-update-hx5vd" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.860421 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/359ff9c1-712f-4a98-b617-c94a4f7a1843-operator-scripts\") pod \"359ff9c1-712f-4a98-b617-c94a4f7a1843\" (UID: \"359ff9c1-712f-4a98-b617-c94a4f7a1843\") " Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.860490 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcfc139e-ad4c-4214-9403-73634951cd57-operator-scripts\") pod \"bcfc139e-ad4c-4214-9403-73634951cd57\" (UID: \"bcfc139e-ad4c-4214-9403-73634951cd57\") " Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.860658 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlw6b\" (UniqueName: \"kubernetes.io/projected/359ff9c1-712f-4a98-b617-c94a4f7a1843-kube-api-access-jlw6b\") pod \"359ff9c1-712f-4a98-b617-c94a4f7a1843\" (UID: \"359ff9c1-712f-4a98-b617-c94a4f7a1843\") " Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.860713 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmwgr\" (UniqueName: \"kubernetes.io/projected/5c2c95a8-3204-4882-b77c-4f09f82f9b14-kube-api-access-vmwgr\") pod \"5c2c95a8-3204-4882-b77c-4f09f82f9b14\" (UID: \"5c2c95a8-3204-4882-b77c-4f09f82f9b14\") " Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.860803 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td7gb\" (UniqueName: \"kubernetes.io/projected/e6500ae3-d348-4473-81db-c795446ba15d-kube-api-access-td7gb\") pod \"e6500ae3-d348-4473-81db-c795446ba15d\" (UID: \"e6500ae3-d348-4473-81db-c795446ba15d\") " Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.860871 4870 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6500ae3-d348-4473-81db-c795446ba15d-operator-scripts\") pod \"e6500ae3-d348-4473-81db-c795446ba15d\" (UID: \"e6500ae3-d348-4473-81db-c795446ba15d\") " Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.860915 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c2c95a8-3204-4882-b77c-4f09f82f9b14-operator-scripts\") pod \"5c2c95a8-3204-4882-b77c-4f09f82f9b14\" (UID: \"5c2c95a8-3204-4882-b77c-4f09f82f9b14\") " Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.860955 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps2vc\" (UniqueName: \"kubernetes.io/projected/bcfc139e-ad4c-4214-9403-73634951cd57-kube-api-access-ps2vc\") pod \"bcfc139e-ad4c-4214-9403-73634951cd57\" (UID: \"bcfc139e-ad4c-4214-9403-73634951cd57\") " Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.861114 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcfc139e-ad4c-4214-9403-73634951cd57-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcfc139e-ad4c-4214-9403-73634951cd57" (UID: "bcfc139e-ad4c-4214-9403-73634951cd57"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.861180 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/359ff9c1-712f-4a98-b617-c94a4f7a1843-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "359ff9c1-712f-4a98-b617-c94a4f7a1843" (UID: "359ff9c1-712f-4a98-b617-c94a4f7a1843"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.861558 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vcbq\" (UniqueName: \"kubernetes.io/projected/661affca-3ccf-42b7-9095-eb1dbd2e38fb-kube-api-access-8vcbq\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.861583 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/359ff9c1-712f-4a98-b617-c94a4f7a1843-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.861597 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcfc139e-ad4c-4214-9403-73634951cd57-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.861611 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/661affca-3ccf-42b7-9095-eb1dbd2e38fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.861756 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c2c95a8-3204-4882-b77c-4f09f82f9b14-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c2c95a8-3204-4882-b77c-4f09f82f9b14" (UID: "5c2c95a8-3204-4882-b77c-4f09f82f9b14"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.861812 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6500ae3-d348-4473-81db-c795446ba15d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6500ae3-d348-4473-81db-c795446ba15d" (UID: "e6500ae3-d348-4473-81db-c795446ba15d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.866358 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c2c95a8-3204-4882-b77c-4f09f82f9b14-kube-api-access-vmwgr" (OuterVolumeSpecName: "kube-api-access-vmwgr") pod "5c2c95a8-3204-4882-b77c-4f09f82f9b14" (UID: "5c2c95a8-3204-4882-b77c-4f09f82f9b14"). InnerVolumeSpecName "kube-api-access-vmwgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.868458 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359ff9c1-712f-4a98-b617-c94a4f7a1843-kube-api-access-jlw6b" (OuterVolumeSpecName: "kube-api-access-jlw6b") pod "359ff9c1-712f-4a98-b617-c94a4f7a1843" (UID: "359ff9c1-712f-4a98-b617-c94a4f7a1843"). InnerVolumeSpecName "kube-api-access-jlw6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.868526 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcfc139e-ad4c-4214-9403-73634951cd57-kube-api-access-ps2vc" (OuterVolumeSpecName: "kube-api-access-ps2vc") pod "bcfc139e-ad4c-4214-9403-73634951cd57" (UID: "bcfc139e-ad4c-4214-9403-73634951cd57"). InnerVolumeSpecName "kube-api-access-ps2vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.871139 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6500ae3-d348-4473-81db-c795446ba15d-kube-api-access-td7gb" (OuterVolumeSpecName: "kube-api-access-td7gb") pod "e6500ae3-d348-4473-81db-c795446ba15d" (UID: "e6500ae3-d348-4473-81db-c795446ba15d"). InnerVolumeSpecName "kube-api-access-td7gb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.963762 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlw6b\" (UniqueName: \"kubernetes.io/projected/359ff9c1-712f-4a98-b617-c94a4f7a1843-kube-api-access-jlw6b\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.963809 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmwgr\" (UniqueName: \"kubernetes.io/projected/5c2c95a8-3204-4882-b77c-4f09f82f9b14-kube-api-access-vmwgr\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.963821 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td7gb\" (UniqueName: \"kubernetes.io/projected/e6500ae3-d348-4473-81db-c795446ba15d-kube-api-access-td7gb\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.963830 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6500ae3-d348-4473-81db-c795446ba15d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.963839 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c2c95a8-3204-4882-b77c-4f09f82f9b14-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:49 crc kubenswrapper[4870]: I0216 17:19:49.963849 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps2vc\" (UniqueName: \"kubernetes.io/projected/bcfc139e-ad4c-4214-9403-73634951cd57-kube-api-access-ps2vc\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.054461 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cd7qj" 
event={"ID":"359ff9c1-712f-4a98-b617-c94a4f7a1843","Type":"ContainerDied","Data":"e53a3ed1842234ab77d0bc31266d804992af32247babdd6db3f977455b4f1d81"} Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.054498 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e53a3ed1842234ab77d0bc31266d804992af32247babdd6db3f977455b4f1d81" Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.054520 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cd7qj" Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.056132 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x7s6z" Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.056150 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x7s6z" event={"ID":"5c2c95a8-3204-4882-b77c-4f09f82f9b14","Type":"ContainerDied","Data":"e397bc7c78b54dbb9dcf6a4e6f04d3b0347e393e86dbc83335cb949a19eab354"} Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.056190 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e397bc7c78b54dbb9dcf6a4e6f04d3b0347e393e86dbc83335cb949a19eab354" Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.058082 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7b31-account-create-update-ktdzc" event={"ID":"3e18a14e-1e9e-44d9-8ed9-a93214973da3","Type":"ContainerDied","Data":"75b250693fba8a1e5952b4611ba5c2459cbbc643604ae076874fb0842c520208"} Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.058101 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75b250693fba8a1e5952b4611ba5c2459cbbc643604ae076874fb0842c520208" Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.058111 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7b31-account-create-update-ktdzc" Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.076733 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wp94p" event={"ID":"bcfc139e-ad4c-4214-9403-73634951cd57","Type":"ContainerDied","Data":"9c45f8b462bda477c4d8a696a15e35b4085ca84d3ce271672f9b01087ce00851"} Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.076757 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c45f8b462bda477c4d8a696a15e35b4085ca84d3ce271672f9b01087ce00851" Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.076764 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wp94p" Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.079678 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2f89-account-create-update-hx5vd" Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.079697 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2f89-account-create-update-hx5vd" event={"ID":"e6500ae3-d348-4473-81db-c795446ba15d","Type":"ContainerDied","Data":"079e9ddba9cda3595ae111065845e27a221b1f0403b76c68f83344772567b91e"} Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.079731 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="079e9ddba9cda3595ae111065845e27a221b1f0403b76c68f83344772567b91e" Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.082084 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8cad-account-create-update-7qqnt" Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.082676 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8cad-account-create-update-7qqnt" event={"ID":"661affca-3ccf-42b7-9095-eb1dbd2e38fb","Type":"ContainerDied","Data":"f31fc7f985f1271058f70b33cabb659925957905d71685cbec6ddc2c4bfa0dce"} Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.082729 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f31fc7f985f1271058f70b33cabb659925957905d71685cbec6ddc2c4bfa0dce" Feb 16 17:19:50 crc kubenswrapper[4870]: I0216 17:19:50.098065 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"523f77f6-c829-4d3d-99c1-45bafcb30ee3","Type":"ContainerStarted","Data":"6184c5ab8567c650caaca098276056713018801c2c0b86f2e11eca261f48e205"} Feb 16 17:19:51 crc kubenswrapper[4870]: I0216 17:19:51.854656 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:19:51 crc kubenswrapper[4870]: I0216 17:19:51.935111 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5qck9"] Feb 16 17:19:51 crc kubenswrapper[4870]: I0216 17:19:51.935402 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" podUID="743bd071-f9bd-4948-b99b-cd3e29bfe49e" containerName="dnsmasq-dns" containerID="cri-o://536345f5604d70bac97e4bcd5628d34a588660ad504e6695e8f1ae9a70143cf8" gracePeriod=10 Feb 16 17:19:52 crc kubenswrapper[4870]: I0216 17:19:52.119974 4870 generic.go:334] "Generic (PLEG): container finished" podID="743bd071-f9bd-4948-b99b-cd3e29bfe49e" containerID="536345f5604d70bac97e4bcd5628d34a588660ad504e6695e8f1ae9a70143cf8" exitCode=0 Feb 16 17:19:52 crc kubenswrapper[4870]: I0216 17:19:52.120020 4870 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" event={"ID":"743bd071-f9bd-4948-b99b-cd3e29bfe49e","Type":"ContainerDied","Data":"536345f5604d70bac97e4bcd5628d34a588660ad504e6695e8f1ae9a70143cf8"} Feb 16 17:19:52 crc kubenswrapper[4870]: I0216 17:19:52.166591 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-b4r87"] Feb 16 17:19:52 crc kubenswrapper[4870]: I0216 17:19:52.190793 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-b4r87"] Feb 16 17:19:52 crc kubenswrapper[4870]: I0216 17:19:52.238462 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5f7cff5-8546-44a7-8769-c64a9cf7049d" path="/var/lib/kubelet/pods/e5f7cff5-8546-44a7-8769-c64a9cf7049d/volumes" Feb 16 17:19:52 crc kubenswrapper[4870]: I0216 17:19:52.488417 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" Feb 16 17:19:52 crc kubenswrapper[4870]: I0216 17:19:52.528783 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/743bd071-f9bd-4948-b99b-cd3e29bfe49e-dns-svc\") pod \"743bd071-f9bd-4948-b99b-cd3e29bfe49e\" (UID: \"743bd071-f9bd-4948-b99b-cd3e29bfe49e\") " Feb 16 17:19:52 crc kubenswrapper[4870]: I0216 17:19:52.529044 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743bd071-f9bd-4948-b99b-cd3e29bfe49e-config\") pod \"743bd071-f9bd-4948-b99b-cd3e29bfe49e\" (UID: \"743bd071-f9bd-4948-b99b-cd3e29bfe49e\") " Feb 16 17:19:52 crc kubenswrapper[4870]: I0216 17:19:52.529109 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pg9x\" (UniqueName: \"kubernetes.io/projected/743bd071-f9bd-4948-b99b-cd3e29bfe49e-kube-api-access-7pg9x\") pod \"743bd071-f9bd-4948-b99b-cd3e29bfe49e\" (UID: 
\"743bd071-f9bd-4948-b99b-cd3e29bfe49e\") " Feb 16 17:19:52 crc kubenswrapper[4870]: I0216 17:19:52.549281 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/743bd071-f9bd-4948-b99b-cd3e29bfe49e-kube-api-access-7pg9x" (OuterVolumeSpecName: "kube-api-access-7pg9x") pod "743bd071-f9bd-4948-b99b-cd3e29bfe49e" (UID: "743bd071-f9bd-4948-b99b-cd3e29bfe49e"). InnerVolumeSpecName "kube-api-access-7pg9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:19:52 crc kubenswrapper[4870]: I0216 17:19:52.577411 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/743bd071-f9bd-4948-b99b-cd3e29bfe49e-config" (OuterVolumeSpecName: "config") pod "743bd071-f9bd-4948-b99b-cd3e29bfe49e" (UID: "743bd071-f9bd-4948-b99b-cd3e29bfe49e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:52 crc kubenswrapper[4870]: I0216 17:19:52.586604 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/743bd071-f9bd-4948-b99b-cd3e29bfe49e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "743bd071-f9bd-4948-b99b-cd3e29bfe49e" (UID: "743bd071-f9bd-4948-b99b-cd3e29bfe49e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:19:52 crc kubenswrapper[4870]: I0216 17:19:52.631718 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pg9x\" (UniqueName: \"kubernetes.io/projected/743bd071-f9bd-4948-b99b-cd3e29bfe49e-kube-api-access-7pg9x\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:52 crc kubenswrapper[4870]: I0216 17:19:52.631752 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/743bd071-f9bd-4948-b99b-cd3e29bfe49e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:52 crc kubenswrapper[4870]: I0216 17:19:52.631763 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743bd071-f9bd-4948-b99b-cd3e29bfe49e-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:19:52 crc kubenswrapper[4870]: I0216 17:19:52.945676 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-547gr" Feb 16 17:19:53 crc kubenswrapper[4870]: I0216 17:19:53.117797 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-bgr8z" Feb 16 17:19:53 crc kubenswrapper[4870]: I0216 17:19:53.155669 4870 generic.go:334] "Generic (PLEG): container finished" podID="66aba020-76f1-4cf7-992b-0745bd3c3512" containerID="2a0aeb2e000109dda624a19f547d11b30ddb6a33113ab5239ab656d4cc73f800" exitCode=0 Feb 16 17:19:53 crc kubenswrapper[4870]: I0216 17:19:53.155760 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"66aba020-76f1-4cf7-992b-0745bd3c3512","Type":"ContainerDied","Data":"2a0aeb2e000109dda624a19f547d11b30ddb6a33113ab5239ab656d4cc73f800"} Feb 16 17:19:53 crc kubenswrapper[4870]: I0216 17:19:53.172611 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" 
event={"ID":"743bd071-f9bd-4948-b99b-cd3e29bfe49e","Type":"ContainerDied","Data":"bdb647931bf1a12d87fd4d6371a13368f747a2b522017b14ccdbd1d8ba0b92a7"} Feb 16 17:19:53 crc kubenswrapper[4870]: I0216 17:19:53.172675 4870 scope.go:117] "RemoveContainer" containerID="536345f5604d70bac97e4bcd5628d34a588660ad504e6695e8f1ae9a70143cf8" Feb 16 17:19:53 crc kubenswrapper[4870]: I0216 17:19:53.172892 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5qck9" Feb 16 17:19:53 crc kubenswrapper[4870]: I0216 17:19:53.196924 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"523f77f6-c829-4d3d-99c1-45bafcb30ee3","Type":"ContainerStarted","Data":"f5a882f2f11a92dadabc50e0003aa9d74ba5ba163255892e82d9bb7ddff52a12"} Feb 16 17:19:53 crc kubenswrapper[4870]: I0216 17:19:53.214306 4870 generic.go:334] "Generic (PLEG): container finished" podID="d027dcfc-cbb1-4c78-b55f-0ed148b1faad" containerID="149a313529f1889f323e8d01ea09a58bd9c3ca4118908686140007106438a56b" exitCode=0 Feb 16 17:19:53 crc kubenswrapper[4870]: I0216 17:19:53.214354 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d027dcfc-cbb1-4c78-b55f-0ed148b1faad","Type":"ContainerDied","Data":"149a313529f1889f323e8d01ea09a58bd9c3ca4118908686140007106438a56b"} Feb 16 17:19:53 crc kubenswrapper[4870]: I0216 17:19:53.234644 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.929533316 podStartE2EDuration="51.234617993s" podCreationTimestamp="2026-02-16 17:19:02 +0000 UTC" firstStartedPulling="2026-02-16 17:19:18.615793424 +0000 UTC m=+1163.099257818" lastFinishedPulling="2026-02-16 17:19:51.920878111 +0000 UTC m=+1196.404342495" observedRunningTime="2026-02-16 17:19:53.220601088 +0000 UTC m=+1197.704065472" watchObservedRunningTime="2026-02-16 17:19:53.234617993 +0000 UTC 
m=+1197.718082367" Feb 16 17:19:53 crc kubenswrapper[4870]: I0216 17:19:53.249643 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn" Feb 16 17:19:53 crc kubenswrapper[4870]: I0216 17:19:53.379323 4870 scope.go:117] "RemoveContainer" containerID="bbd7d2f61ec90169be154d4babaade2aa032ea16deff8eecb73e82a7a0b1be08" Feb 16 17:19:53 crc kubenswrapper[4870]: I0216 17:19:53.425936 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5qck9"] Feb 16 17:19:53 crc kubenswrapper[4870]: I0216 17:19:53.433042 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5qck9"] Feb 16 17:19:53 crc kubenswrapper[4870]: I0216 17:19:53.590512 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 17:19:54 crc kubenswrapper[4870]: I0216 17:19:54.098966 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="d158e8d5-206e-4289-a1e5-247fddf29a11" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 17:19:54 crc kubenswrapper[4870]: I0216 17:19:54.225873 4870 generic.go:334] "Generic (PLEG): container finished" podID="bdf9770f-4fe7-4b42-9968-4fc4461ef6aa" containerID="af6b227d7b7b03f2911132c3978f36c7186192e7f004d242945d6bf5393a016e" exitCode=0 Feb 16 17:19:54 crc kubenswrapper[4870]: I0216 17:19:54.233581 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="743bd071-f9bd-4948-b99b-cd3e29bfe49e" path="/var/lib/kubelet/pods/743bd071-f9bd-4948-b99b-cd3e29bfe49e/volumes" Feb 16 17:19:54 crc kubenswrapper[4870]: I0216 17:19:54.234163 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gnnq2" 
event={"ID":"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa","Type":"ContainerDied","Data":"af6b227d7b7b03f2911132c3978f36c7186192e7f004d242945d6bf5393a016e"} Feb 16 17:19:54 crc kubenswrapper[4870]: I0216 17:19:54.234203 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d027dcfc-cbb1-4c78-b55f-0ed148b1faad","Type":"ContainerStarted","Data":"eba933c65650dfa704ebbcb4f9f24dc7c1383a1fe33332205710e12a11822238"} Feb 16 17:19:54 crc kubenswrapper[4870]: I0216 17:19:54.234214 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"66aba020-76f1-4cf7-992b-0745bd3c3512","Type":"ContainerStarted","Data":"9a61a3b4e97d8de8890da5ebe56574c8f2c95e159f2f31acea9791e6037bd75c"} Feb 16 17:19:54 crc kubenswrapper[4870]: I0216 17:19:54.234383 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:19:54 crc kubenswrapper[4870]: I0216 17:19:54.261020 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.256534942 podStartE2EDuration="59.26099945s" podCreationTimestamp="2026-02-16 17:18:55 +0000 UTC" firstStartedPulling="2026-02-16 17:18:57.833196637 +0000 UTC m=+1142.316661021" lastFinishedPulling="2026-02-16 17:19:18.837661145 +0000 UTC m=+1163.321125529" observedRunningTime="2026-02-16 17:19:54.254314481 +0000 UTC m=+1198.737778875" watchObservedRunningTime="2026-02-16 17:19:54.26099945 +0000 UTC m=+1198.744463834" Feb 16 17:19:54 crc kubenswrapper[4870]: I0216 17:19:54.304281 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.923702088 podStartE2EDuration="59.304262627s" podCreationTimestamp="2026-02-16 17:18:55 +0000 UTC" firstStartedPulling="2026-02-16 17:18:57.329648881 +0000 UTC m=+1141.813113265" lastFinishedPulling="2026-02-16 17:19:18.71020941 +0000 UTC 
m=+1163.193673804" observedRunningTime="2026-02-16 17:19:54.296810047 +0000 UTC m=+1198.780274451" watchObservedRunningTime="2026-02-16 17:19:54.304262627 +0000 UTC m=+1198.787727011" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.577024 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-5r2tl"] Feb 16 17:19:55 crc kubenswrapper[4870]: E0216 17:19:55.577781 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359ff9c1-712f-4a98-b617-c94a4f7a1843" containerName="mariadb-database-create" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.577795 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="359ff9c1-712f-4a98-b617-c94a4f7a1843" containerName="mariadb-database-create" Feb 16 17:19:55 crc kubenswrapper[4870]: E0216 17:19:55.577813 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743bd071-f9bd-4948-b99b-cd3e29bfe49e" containerName="init" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.577820 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="743bd071-f9bd-4948-b99b-cd3e29bfe49e" containerName="init" Feb 16 17:19:55 crc kubenswrapper[4870]: E0216 17:19:55.577831 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e18a14e-1e9e-44d9-8ed9-a93214973da3" containerName="mariadb-account-create-update" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.577838 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e18a14e-1e9e-44d9-8ed9-a93214973da3" containerName="mariadb-account-create-update" Feb 16 17:19:55 crc kubenswrapper[4870]: E0216 17:19:55.577844 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcfc139e-ad4c-4214-9403-73634951cd57" containerName="mariadb-database-create" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.577850 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcfc139e-ad4c-4214-9403-73634951cd57" containerName="mariadb-database-create" Feb 16 17:19:55 crc 
kubenswrapper[4870]: E0216 17:19:55.577858 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f7cff5-8546-44a7-8769-c64a9cf7049d" containerName="mariadb-account-create-update" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.577864 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f7cff5-8546-44a7-8769-c64a9cf7049d" containerName="mariadb-account-create-update" Feb 16 17:19:55 crc kubenswrapper[4870]: E0216 17:19:55.577875 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2c95a8-3204-4882-b77c-4f09f82f9b14" containerName="mariadb-database-create" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.577880 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2c95a8-3204-4882-b77c-4f09f82f9b14" containerName="mariadb-database-create" Feb 16 17:19:55 crc kubenswrapper[4870]: E0216 17:19:55.577890 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661affca-3ccf-42b7-9095-eb1dbd2e38fb" containerName="mariadb-account-create-update" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.577896 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="661affca-3ccf-42b7-9095-eb1dbd2e38fb" containerName="mariadb-account-create-update" Feb 16 17:19:55 crc kubenswrapper[4870]: E0216 17:19:55.577905 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6500ae3-d348-4473-81db-c795446ba15d" containerName="mariadb-account-create-update" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.577911 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6500ae3-d348-4473-81db-c795446ba15d" containerName="mariadb-account-create-update" Feb 16 17:19:55 crc kubenswrapper[4870]: E0216 17:19:55.577927 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743bd071-f9bd-4948-b99b-cd3e29bfe49e" containerName="dnsmasq-dns" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.577934 4870 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="743bd071-f9bd-4948-b99b-cd3e29bfe49e" containerName="dnsmasq-dns" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.578121 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="743bd071-f9bd-4948-b99b-cd3e29bfe49e" containerName="dnsmasq-dns" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.578156 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f7cff5-8546-44a7-8769-c64a9cf7049d" containerName="mariadb-account-create-update" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.578183 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="661affca-3ccf-42b7-9095-eb1dbd2e38fb" containerName="mariadb-account-create-update" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.578198 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="359ff9c1-712f-4a98-b617-c94a4f7a1843" containerName="mariadb-database-create" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.578213 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e18a14e-1e9e-44d9-8ed9-a93214973da3" containerName="mariadb-account-create-update" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.578226 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcfc139e-ad4c-4214-9403-73634951cd57" containerName="mariadb-database-create" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.578241 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c2c95a8-3204-4882-b77c-4f09f82f9b14" containerName="mariadb-database-create" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.578258 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6500ae3-d348-4473-81db-c795446ba15d" containerName="mariadb-account-create-update" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.579221 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5r2tl" Feb 16 17:19:55 crc kubenswrapper[4870]: W0216 17:19:55.581484 4870 reflector.go:561] object-"openstack"/"glance-config-data": failed to list *v1.Secret: secrets "glance-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 16 17:19:55 crc kubenswrapper[4870]: E0216 17:19:55.581532 4870 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"glance-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.582636 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wpt8f" Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.591684 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5r2tl"] Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.644584 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gnnq2"
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.698017 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-combined-ca-bundle\") pod \"glance-db-sync-5r2tl\" (UID: \"998e2386-0941-4f2b-8e23-d77138831ad4\") " pod="openstack/glance-db-sync-5r2tl"
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.698209 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-db-sync-config-data\") pod \"glance-db-sync-5r2tl\" (UID: \"998e2386-0941-4f2b-8e23-d77138831ad4\") " pod="openstack/glance-db-sync-5r2tl"
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.698425 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rsdg\" (UniqueName: \"kubernetes.io/projected/998e2386-0941-4f2b-8e23-d77138831ad4-kube-api-access-5rsdg\") pod \"glance-db-sync-5r2tl\" (UID: \"998e2386-0941-4f2b-8e23-d77138831ad4\") " pod="openstack/glance-db-sync-5r2tl"
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.698575 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-config-data\") pod \"glance-db-sync-5r2tl\" (UID: \"998e2386-0941-4f2b-8e23-d77138831ad4\") " pod="openstack/glance-db-sync-5r2tl"
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.800215 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-ring-data-devices\") pod \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") "
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.800303 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-etc-swift\") pod \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") "
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.800391 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x699\" (UniqueName: \"kubernetes.io/projected/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-kube-api-access-8x699\") pod \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") "
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.800510 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-combined-ca-bundle\") pod \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") "
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.800544 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-dispersionconf\") pod \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") "
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.800575 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-swiftconf\") pod \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") "
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.800636 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-scripts\") pod \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\" (UID: \"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa\") "
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.800850 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bdf9770f-4fe7-4b42-9968-4fc4461ef6aa" (UID: "bdf9770f-4fe7-4b42-9968-4fc4461ef6aa"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.800881 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-config-data\") pod \"glance-db-sync-5r2tl\" (UID: \"998e2386-0941-4f2b-8e23-d77138831ad4\") " pod="openstack/glance-db-sync-5r2tl"
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.800967 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-combined-ca-bundle\") pod \"glance-db-sync-5r2tl\" (UID: \"998e2386-0941-4f2b-8e23-d77138831ad4\") " pod="openstack/glance-db-sync-5r2tl"
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.801051 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-db-sync-config-data\") pod \"glance-db-sync-5r2tl\" (UID: \"998e2386-0941-4f2b-8e23-d77138831ad4\") " pod="openstack/glance-db-sync-5r2tl"
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.801140 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rsdg\" (UniqueName: \"kubernetes.io/projected/998e2386-0941-4f2b-8e23-d77138831ad4-kube-api-access-5rsdg\") pod \"glance-db-sync-5r2tl\" (UID: \"998e2386-0941-4f2b-8e23-d77138831ad4\") " pod="openstack/glance-db-sync-5r2tl"
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.801163 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bdf9770f-4fe7-4b42-9968-4fc4461ef6aa" (UID: "bdf9770f-4fe7-4b42-9968-4fc4461ef6aa"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.801253 4870 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.807809 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-combined-ca-bundle\") pod \"glance-db-sync-5r2tl\" (UID: \"998e2386-0941-4f2b-8e23-d77138831ad4\") " pod="openstack/glance-db-sync-5r2tl"
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.816775 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-kube-api-access-8x699" (OuterVolumeSpecName: "kube-api-access-8x699") pod "bdf9770f-4fe7-4b42-9968-4fc4461ef6aa" (UID: "bdf9770f-4fe7-4b42-9968-4fc4461ef6aa"). InnerVolumeSpecName "kube-api-access-8x699". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.823710 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bdf9770f-4fe7-4b42-9968-4fc4461ef6aa" (UID: "bdf9770f-4fe7-4b42-9968-4fc4461ef6aa"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.831022 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rsdg\" (UniqueName: \"kubernetes.io/projected/998e2386-0941-4f2b-8e23-d77138831ad4-kube-api-access-5rsdg\") pod \"glance-db-sync-5r2tl\" (UID: \"998e2386-0941-4f2b-8e23-d77138831ad4\") " pod="openstack/glance-db-sync-5r2tl"
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.846486 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-scripts" (OuterVolumeSpecName: "scripts") pod "bdf9770f-4fe7-4b42-9968-4fc4461ef6aa" (UID: "bdf9770f-4fe7-4b42-9968-4fc4461ef6aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.848782 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bdf9770f-4fe7-4b42-9968-4fc4461ef6aa" (UID: "bdf9770f-4fe7-4b42-9968-4fc4461ef6aa"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.871249 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdf9770f-4fe7-4b42-9968-4fc4461ef6aa" (UID: "bdf9770f-4fe7-4b42-9968-4fc4461ef6aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.902924 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.902986 4870 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.902999 4870 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.903010 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.903023 4870 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 16 17:19:55 crc kubenswrapper[4870]: I0216 17:19:55.903034 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x699\" (UniqueName: \"kubernetes.io/projected/bdf9770f-4fe7-4b42-9968-4fc4461ef6aa-kube-api-access-8x699\") on node \"crc\" DevicePath \"\""
Feb 16 17:19:56 crc kubenswrapper[4870]: I0216 17:19:56.285203 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gnnq2" event={"ID":"bdf9770f-4fe7-4b42-9968-4fc4461ef6aa","Type":"ContainerDied","Data":"1a131e7621c652f43905e4373127e03802ff08ad775d473f75b4cddb2534fc6b"}
Feb 16 17:19:56 crc kubenswrapper[4870]: I0216 17:19:56.285247 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a131e7621c652f43905e4373127e03802ff08ad775d473f75b4cddb2534fc6b"
Feb 16 17:19:56 crc kubenswrapper[4870]: I0216 17:19:56.285338 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gnnq2"
Feb 16 17:19:56 crc kubenswrapper[4870]: I0216 17:19:56.652000 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Feb 16 17:19:56 crc kubenswrapper[4870]: I0216 17:19:56.669301 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-db-sync-config-data\") pod \"glance-db-sync-5r2tl\" (UID: \"998e2386-0941-4f2b-8e23-d77138831ad4\") " pod="openstack/glance-db-sync-5r2tl"
Feb 16 17:19:56 crc kubenswrapper[4870]: I0216 17:19:56.685378 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-config-data\") pod \"glance-db-sync-5r2tl\" (UID: \"998e2386-0941-4f2b-8e23-d77138831ad4\") " pod="openstack/glance-db-sync-5r2tl"
Feb 16 17:19:56 crc kubenswrapper[4870]: I0216 17:19:56.865256 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wpt8f"
Feb 16 17:19:56 crc kubenswrapper[4870]: I0216 17:19:56.873274 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5r2tl"
Feb 16 17:19:57 crc kubenswrapper[4870]: I0216 17:19:57.053420 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 16 17:19:57 crc kubenswrapper[4870]: I0216 17:19:57.155020 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wgh2j"]
Feb 16 17:19:57 crc kubenswrapper[4870]: E0216 17:19:57.155402 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf9770f-4fe7-4b42-9968-4fc4461ef6aa" containerName="swift-ring-rebalance"
Feb 16 17:19:57 crc kubenswrapper[4870]: I0216 17:19:57.155419 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf9770f-4fe7-4b42-9968-4fc4461ef6aa" containerName="swift-ring-rebalance"
Feb 16 17:19:57 crc kubenswrapper[4870]: I0216 17:19:57.155602 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf9770f-4fe7-4b42-9968-4fc4461ef6aa" containerName="swift-ring-rebalance"
Feb 16 17:19:57 crc kubenswrapper[4870]: I0216 17:19:57.156252 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wgh2j"
Feb 16 17:19:57 crc kubenswrapper[4870]: I0216 17:19:57.157569 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 16 17:19:57 crc kubenswrapper[4870]: I0216 17:19:57.174274 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wgh2j"]
Feb 16 17:19:57 crc kubenswrapper[4870]: I0216 17:19:57.333034 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9e28213-754a-4478-8b49-310d4cb4e8bc-operator-scripts\") pod \"root-account-create-update-wgh2j\" (UID: \"e9e28213-754a-4478-8b49-310d4cb4e8bc\") " pod="openstack/root-account-create-update-wgh2j"
Feb 16 17:19:57 crc kubenswrapper[4870]: I0216 17:19:57.333102 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssrfn\" (UniqueName: \"kubernetes.io/projected/e9e28213-754a-4478-8b49-310d4cb4e8bc-kube-api-access-ssrfn\") pod \"root-account-create-update-wgh2j\" (UID: \"e9e28213-754a-4478-8b49-310d4cb4e8bc\") " pod="openstack/root-account-create-update-wgh2j"
Feb 16 17:19:57 crc kubenswrapper[4870]: I0216 17:19:57.436468 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9e28213-754a-4478-8b49-310d4cb4e8bc-operator-scripts\") pod \"root-account-create-update-wgh2j\" (UID: \"e9e28213-754a-4478-8b49-310d4cb4e8bc\") " pod="openstack/root-account-create-update-wgh2j"
Feb 16 17:19:57 crc kubenswrapper[4870]: I0216 17:19:57.436550 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssrfn\" (UniqueName: \"kubernetes.io/projected/e9e28213-754a-4478-8b49-310d4cb4e8bc-kube-api-access-ssrfn\") pod \"root-account-create-update-wgh2j\" (UID: \"e9e28213-754a-4478-8b49-310d4cb4e8bc\") " pod="openstack/root-account-create-update-wgh2j"
Feb 16 17:19:57 crc kubenswrapper[4870]: I0216 17:19:57.437382 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9e28213-754a-4478-8b49-310d4cb4e8bc-operator-scripts\") pod \"root-account-create-update-wgh2j\" (UID: \"e9e28213-754a-4478-8b49-310d4cb4e8bc\") " pod="openstack/root-account-create-update-wgh2j"
Feb 16 17:19:57 crc kubenswrapper[4870]: I0216 17:19:57.447272 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5r2tl"]
Feb 16 17:19:57 crc kubenswrapper[4870]: I0216 17:19:57.475703 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssrfn\" (UniqueName: \"kubernetes.io/projected/e9e28213-754a-4478-8b49-310d4cb4e8bc-kube-api-access-ssrfn\") pod \"root-account-create-update-wgh2j\" (UID: \"e9e28213-754a-4478-8b49-310d4cb4e8bc\") " pod="openstack/root-account-create-update-wgh2j"
Feb 16 17:19:57 crc kubenswrapper[4870]: I0216 17:19:57.482510 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wgh2j"
Feb 16 17:19:57 crc kubenswrapper[4870]: I0216 17:19:57.528685 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Feb 16 17:19:57 crc kubenswrapper[4870]: I0216 17:19:57.975825 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wgh2j"]
Feb 16 17:19:57 crc kubenswrapper[4870]: W0216 17:19:57.976102 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9e28213_754a_4478_8b49_310d4cb4e8bc.slice/crio-67c26127858d680e03c548dbf2ae102c6062ff864025647cde6a537865b7fad2 WatchSource:0}: Error finding container 67c26127858d680e03c548dbf2ae102c6062ff864025647cde6a537865b7fad2: Status 404 returned error can't find the container with id 67c26127858d680e03c548dbf2ae102c6062ff864025647cde6a537865b7fad2
Feb 16 17:19:58 crc kubenswrapper[4870]: I0216 17:19:58.315257 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5r2tl" event={"ID":"998e2386-0941-4f2b-8e23-d77138831ad4","Type":"ContainerStarted","Data":"c7a7e3522f626f904eb99bdb545b15d439a783cf30da8648720780fb9c554597"}
Feb 16 17:19:58 crc kubenswrapper[4870]: I0216 17:19:58.318810 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wgh2j" event={"ID":"e9e28213-754a-4478-8b49-310d4cb4e8bc","Type":"ContainerStarted","Data":"4d67941051f0bb1162b8b38cc14716c6a202053b1fb2fcc798f8cc37fbbd5355"}
Feb 16 17:19:58 crc kubenswrapper[4870]: I0216 17:19:58.318875 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wgh2j" event={"ID":"e9e28213-754a-4478-8b49-310d4cb4e8bc","Type":"ContainerStarted","Data":"67c26127858d680e03c548dbf2ae102c6062ff864025647cde6a537865b7fad2"}
Feb 16 17:19:58 crc kubenswrapper[4870]: I0216 17:19:58.349289 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-wgh2j" podStartSLOduration=1.34926768 podStartE2EDuration="1.34926768s" podCreationTimestamp="2026-02-16 17:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:19:58.342761576 +0000 UTC m=+1202.826225960" watchObservedRunningTime="2026-02-16 17:19:58.34926768 +0000 UTC m=+1202.832732064"
Feb 16 17:19:59 crc kubenswrapper[4870]: I0216 17:19:59.333532 4870 generic.go:334] "Generic (PLEG): container finished" podID="e9e28213-754a-4478-8b49-310d4cb4e8bc" containerID="4d67941051f0bb1162b8b38cc14716c6a202053b1fb2fcc798f8cc37fbbd5355" exitCode=0
Feb 16 17:19:59 crc kubenswrapper[4870]: I0216 17:19:59.333579 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wgh2j" event={"ID":"e9e28213-754a-4478-8b49-310d4cb4e8bc","Type":"ContainerDied","Data":"4d67941051f0bb1162b8b38cc14716c6a202053b1fb2fcc798f8cc37fbbd5355"}
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.139281 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ktsg2" podUID="2f4b2faa-7ab7-40c8-a28f-d93749011dbe" containerName="ovn-controller" probeResult="failure" output=<
Feb 16 17:20:01 crc kubenswrapper[4870]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 16 17:20:01 crc kubenswrapper[4870]: >
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.174588 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rh6tb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.175826 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rh6tb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.428364 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ktsg2-config-wfrgb"]
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.445641 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.453624 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.473824 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ktsg2-config-wfrgb"]
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.524284 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-run\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.524388 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-log-ovn\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.524443 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c70a0819-992e-4dd0-96b8-5970678cca52-additional-scripts\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.524645 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-run-ovn\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.524776 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c70a0819-992e-4dd0-96b8-5970678cca52-scripts\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.524874 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbvmq\" (UniqueName: \"kubernetes.io/projected/c70a0819-992e-4dd0-96b8-5970678cca52-kube-api-access-tbvmq\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.627926 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-run\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.628020 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-log-ovn\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.628062 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c70a0819-992e-4dd0-96b8-5970678cca52-additional-scripts\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.628153 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-run-ovn\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.628212 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c70a0819-992e-4dd0-96b8-5970678cca52-scripts\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.628217 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-run\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.628274 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbvmq\" (UniqueName: \"kubernetes.io/projected/c70a0819-992e-4dd0-96b8-5970678cca52-kube-api-access-tbvmq\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.628874 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c70a0819-992e-4dd0-96b8-5970678cca52-additional-scripts\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.628935 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-log-ovn\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.628987 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-run-ovn\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.631042 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c70a0819-992e-4dd0-96b8-5970678cca52-scripts\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.648210 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbvmq\" (UniqueName: \"kubernetes.io/projected/c70a0819-992e-4dd0-96b8-5970678cca52-kube-api-access-tbvmq\") pod \"ovn-controller-ktsg2-config-wfrgb\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:01 crc kubenswrapper[4870]: I0216 17:20:01.796802 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ktsg2-config-wfrgb"
Feb 16 17:20:02 crc kubenswrapper[4870]: I0216 17:20:02.162882 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wgh2j"
Feb 16 17:20:02 crc kubenswrapper[4870]: I0216 17:20:02.263688 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssrfn\" (UniqueName: \"kubernetes.io/projected/e9e28213-754a-4478-8b49-310d4cb4e8bc-kube-api-access-ssrfn\") pod \"e9e28213-754a-4478-8b49-310d4cb4e8bc\" (UID: \"e9e28213-754a-4478-8b49-310d4cb4e8bc\") "
Feb 16 17:20:02 crc kubenswrapper[4870]: I0216 17:20:02.263829 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9e28213-754a-4478-8b49-310d4cb4e8bc-operator-scripts\") pod \"e9e28213-754a-4478-8b49-310d4cb4e8bc\" (UID: \"e9e28213-754a-4478-8b49-310d4cb4e8bc\") "
Feb 16 17:20:02 crc kubenswrapper[4870]: I0216 17:20:02.264632 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9e28213-754a-4478-8b49-310d4cb4e8bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9e28213-754a-4478-8b49-310d4cb4e8bc" (UID: "e9e28213-754a-4478-8b49-310d4cb4e8bc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:20:02 crc kubenswrapper[4870]: I0216 17:20:02.269533 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9e28213-754a-4478-8b49-310d4cb4e8bc-kube-api-access-ssrfn" (OuterVolumeSpecName: "kube-api-access-ssrfn") pod "e9e28213-754a-4478-8b49-310d4cb4e8bc" (UID: "e9e28213-754a-4478-8b49-310d4cb4e8bc"). InnerVolumeSpecName "kube-api-access-ssrfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:20:02 crc kubenswrapper[4870]: I0216 17:20:02.366165 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssrfn\" (UniqueName: \"kubernetes.io/projected/e9e28213-754a-4478-8b49-310d4cb4e8bc-kube-api-access-ssrfn\") on node \"crc\" DevicePath \"\""
Feb 16 17:20:02 crc kubenswrapper[4870]: I0216 17:20:02.366195 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9e28213-754a-4478-8b49-310d4cb4e8bc-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:20:02 crc kubenswrapper[4870]: I0216 17:20:02.375194 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wgh2j" event={"ID":"e9e28213-754a-4478-8b49-310d4cb4e8bc","Type":"ContainerDied","Data":"67c26127858d680e03c548dbf2ae102c6062ff864025647cde6a537865b7fad2"}
Feb 16 17:20:02 crc kubenswrapper[4870]: I0216 17:20:02.375237 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67c26127858d680e03c548dbf2ae102c6062ff864025647cde6a537865b7fad2"
Feb 16 17:20:02 crc kubenswrapper[4870]: I0216 17:20:02.375273 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wgh2j"
Feb 16 17:20:02 crc kubenswrapper[4870]: I0216 17:20:02.533722 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ktsg2-config-wfrgb"]
Feb 16 17:20:02 crc kubenswrapper[4870]: W0216 17:20:02.537698 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc70a0819_992e_4dd0_96b8_5970678cca52.slice/crio-7fe573f38c7d208ac95003dcdbfd7c8c05911f656faeba4bc06d57990f9a7339 WatchSource:0}: Error finding container 7fe573f38c7d208ac95003dcdbfd7c8c05911f656faeba4bc06d57990f9a7339: Status 404 returned error can't find the container with id 7fe573f38c7d208ac95003dcdbfd7c8c05911f656faeba4bc06d57990f9a7339
Feb 16 17:20:03 crc kubenswrapper[4870]: I0216 17:20:03.386879 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ktsg2-config-wfrgb" event={"ID":"c70a0819-992e-4dd0-96b8-5970678cca52","Type":"ContainerStarted","Data":"17b0f62ad0b46c0568574da2a40168c409f3f6bfae5dfbb09cdd75a76e196661"}
Feb 16 17:20:03 crc kubenswrapper[4870]: I0216 17:20:03.386931 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ktsg2-config-wfrgb" event={"ID":"c70a0819-992e-4dd0-96b8-5970678cca52","Type":"ContainerStarted","Data":"7fe573f38c7d208ac95003dcdbfd7c8c05911f656faeba4bc06d57990f9a7339"}
Feb 16 17:20:03 crc kubenswrapper[4870]: I0216 17:20:03.419043 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ktsg2-config-wfrgb" podStartSLOduration=2.419021244 podStartE2EDuration="2.419021244s" podCreationTimestamp="2026-02-16 17:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:03.411791531 +0000 UTC m=+1207.895255925" watchObservedRunningTime="2026-02-16 17:20:03.419021244 +0000 UTC m=+1207.902485638"
Feb 16 17:20:03 crc kubenswrapper[4870]: I0216 17:20:03.590074 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 16 17:20:03 crc kubenswrapper[4870]: I0216 17:20:03.594916 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 16 17:20:04 crc kubenswrapper[4870]: I0216 17:20:04.101235 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="d158e8d5-206e-4289-a1e5-247fddf29a11" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 16 17:20:04 crc kubenswrapper[4870]: I0216 17:20:04.419043 4870 generic.go:334] "Generic (PLEG): container finished" podID="c70a0819-992e-4dd0-96b8-5970678cca52" containerID="17b0f62ad0b46c0568574da2a40168c409f3f6bfae5dfbb09cdd75a76e196661" exitCode=0
Feb 16 17:20:04 crc kubenswrapper[4870]: I0216 17:20:04.419193 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ktsg2-config-wfrgb" event={"ID":"c70a0819-992e-4dd0-96b8-5970678cca52","Type":"ContainerDied","Data":"17b0f62ad0b46c0568574da2a40168c409f3f6bfae5dfbb09cdd75a76e196661"}
Feb 16 17:20:04 crc kubenswrapper[4870]: I0216 17:20:04.421282 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 16 17:20:05 crc kubenswrapper[4870]: I0216 17:20:05.434384 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0"
Feb 16 17:20:05 crc kubenswrapper[4870]: I0216 17:20:05.446450 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/669a24d2-3e17-4ce1-aba2-c45d2a92683a-etc-swift\") pod \"swift-storage-0\" (UID: \"669a24d2-3e17-4ce1-aba2-c45d2a92683a\") " pod="openstack/swift-storage-0"
Feb 16 17:20:05 crc kubenswrapper[4870]: I0216 17:20:05.579490 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 16 17:20:06 crc kubenswrapper[4870]: I0216 17:20:06.148684 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ktsg2"
Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.042373 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.042744 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerName="prometheus" containerID="cri-o://f8ffa535d7520af1c4e297425d560fd1a7020c5dc0ea84661efce87959918cc2" gracePeriod=600
Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.042837 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerName="config-reloader" containerID="cri-o://6184c5ab8567c650caaca098276056713018801c2c0b86f2e11eca261f48e205" gracePeriod=600
Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.042859 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerName="thanos-sidecar" containerID="cri-o://f5a882f2f11a92dadabc50e0003aa9d74ba5ba163255892e82d9bb7ddff52a12" gracePeriod=600
Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.064186 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.377063 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-kq5cz"]
Feb 16 17:20:07 crc kubenswrapper[4870]: E0216 17:20:07.377498 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e28213-754a-4478-8b49-310d4cb4e8bc" containerName="mariadb-account-create-update"
Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.377520 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e28213-754a-4478-8b49-310d4cb4e8bc" containerName="mariadb-account-create-update"
Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.381592 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9e28213-754a-4478-8b49-310d4cb4e8bc" containerName="mariadb-account-create-update"
Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.382441 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.382635 4870 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-db-create-kq5cz" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.402708 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kq5cz"] Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.484493 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a5551a3-6fd8-479b-ad29-488078cc5ad1-operator-scripts\") pod \"cinder-db-create-kq5cz\" (UID: \"3a5551a3-6fd8-479b-ad29-488078cc5ad1\") " pod="openstack/cinder-db-create-kq5cz" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.484662 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdrwj\" (UniqueName: \"kubernetes.io/projected/3a5551a3-6fd8-479b-ad29-488078cc5ad1-kube-api-access-xdrwj\") pod \"cinder-db-create-kq5cz\" (UID: \"3a5551a3-6fd8-479b-ad29-488078cc5ad1\") " pod="openstack/cinder-db-create-kq5cz" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.589862 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a5551a3-6fd8-479b-ad29-488078cc5ad1-operator-scripts\") pod \"cinder-db-create-kq5cz\" (UID: \"3a5551a3-6fd8-479b-ad29-488078cc5ad1\") " pod="openstack/cinder-db-create-kq5cz" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.590280 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdrwj\" (UniqueName: \"kubernetes.io/projected/3a5551a3-6fd8-479b-ad29-488078cc5ad1-kube-api-access-xdrwj\") pod \"cinder-db-create-kq5cz\" (UID: \"3a5551a3-6fd8-479b-ad29-488078cc5ad1\") " pod="openstack/cinder-db-create-kq5cz" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.591117 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3a5551a3-6fd8-479b-ad29-488078cc5ad1-operator-scripts\") pod \"cinder-db-create-kq5cz\" (UID: \"3a5551a3-6fd8-479b-ad29-488078cc5ad1\") " pod="openstack/cinder-db-create-kq5cz" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.593764 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2953-account-create-update-c479c"] Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.595054 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2953-account-create-update-c479c" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.599537 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.604663 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2953-account-create-update-c479c"] Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.611204 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdrwj\" (UniqueName: \"kubernetes.io/projected/3a5551a3-6fd8-479b-ad29-488078cc5ad1-kube-api-access-xdrwj\") pod \"cinder-db-create-kq5cz\" (UID: \"3a5551a3-6fd8-479b-ad29-488078cc5ad1\") " pod="openstack/cinder-db-create-kq5cz" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.692386 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a05594c3-a7c9-4326-83d3-8602b8077b29-operator-scripts\") pod \"cinder-2953-account-create-update-c479c\" (UID: \"a05594c3-a7c9-4326-83d3-8602b8077b29\") " pod="openstack/cinder-2953-account-create-update-c479c" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.692465 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsm22\" (UniqueName: 
\"kubernetes.io/projected/a05594c3-a7c9-4326-83d3-8602b8077b29-kube-api-access-lsm22\") pod \"cinder-2953-account-create-update-c479c\" (UID: \"a05594c3-a7c9-4326-83d3-8602b8077b29\") " pod="openstack/cinder-2953-account-create-update-c479c" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.703513 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kq5cz" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.776635 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-9txqv"] Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.778186 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-9txqv" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.793752 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a05594c3-a7c9-4326-83d3-8602b8077b29-operator-scripts\") pod \"cinder-2953-account-create-update-c479c\" (UID: \"a05594c3-a7c9-4326-83d3-8602b8077b29\") " pod="openstack/cinder-2953-account-create-update-c479c" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.793812 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsm22\" (UniqueName: \"kubernetes.io/projected/a05594c3-a7c9-4326-83d3-8602b8077b29-kube-api-access-lsm22\") pod \"cinder-2953-account-create-update-c479c\" (UID: \"a05594c3-a7c9-4326-83d3-8602b8077b29\") " pod="openstack/cinder-2953-account-create-update-c479c" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.794811 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a05594c3-a7c9-4326-83d3-8602b8077b29-operator-scripts\") pod \"cinder-2953-account-create-update-c479c\" (UID: \"a05594c3-a7c9-4326-83d3-8602b8077b29\") " 
pod="openstack/cinder-2953-account-create-update-c479c" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.806200 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8204-account-create-update-96fmd"] Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.808767 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8204-account-create-update-96fmd" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.818733 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.837932 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-9txqv"] Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.852654 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsm22\" (UniqueName: \"kubernetes.io/projected/a05594c3-a7c9-4326-83d3-8602b8077b29-kube-api-access-lsm22\") pod \"cinder-2953-account-create-update-c479c\" (UID: \"a05594c3-a7c9-4326-83d3-8602b8077b29\") " pod="openstack/cinder-2953-account-create-update-c479c" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.864493 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8204-account-create-update-96fmd"] Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.896276 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70ee1b4f-d0ae-44c6-89fa-03712279a648-operator-scripts\") pod \"barbican-8204-account-create-update-96fmd\" (UID: \"70ee1b4f-d0ae-44c6-89fa-03712279a648\") " pod="openstack/barbican-8204-account-create-update-96fmd" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.896629 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7efe4b83-3d8c-4e4a-a7e0-0bbccb160506-operator-scripts\") pod \"cloudkitty-db-create-9txqv\" (UID: \"7efe4b83-3d8c-4e4a-a7e0-0bbccb160506\") " pod="openstack/cloudkitty-db-create-9txqv" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.896735 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f66mz\" (UniqueName: \"kubernetes.io/projected/70ee1b4f-d0ae-44c6-89fa-03712279a648-kube-api-access-f66mz\") pod \"barbican-8204-account-create-update-96fmd\" (UID: \"70ee1b4f-d0ae-44c6-89fa-03712279a648\") " pod="openstack/barbican-8204-account-create-update-96fmd" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.896839 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp8dg\" (UniqueName: \"kubernetes.io/projected/7efe4b83-3d8c-4e4a-a7e0-0bbccb160506-kube-api-access-hp8dg\") pod \"cloudkitty-db-create-9txqv\" (UID: \"7efe4b83-3d8c-4e4a-a7e0-0bbccb160506\") " pod="openstack/cloudkitty-db-create-9txqv" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.909830 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-dfjd4"] Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.911378 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dfjd4" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.923654 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-x74v2"] Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.925084 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x74v2" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.927909 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.930500 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.930725 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.930877 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pm4j6" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.951963 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dfjd4"] Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.964147 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2953-account-create-update-c479c" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.971613 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x74v2"] Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.998681 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7efe4b83-3d8c-4e4a-a7e0-0bbccb160506-operator-scripts\") pod \"cloudkitty-db-create-9txqv\" (UID: \"7efe4b83-3d8c-4e4a-a7e0-0bbccb160506\") " pod="openstack/cloudkitty-db-create-9txqv" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.998954 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c8030e8-0df0-478c-83b8-2144a0402358-operator-scripts\") pod \"neutron-db-create-dfjd4\" (UID: \"9c8030e8-0df0-478c-83b8-2144a0402358\") " 
pod="openstack/neutron-db-create-dfjd4" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.999138 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f66mz\" (UniqueName: \"kubernetes.io/projected/70ee1b4f-d0ae-44c6-89fa-03712279a648-kube-api-access-f66mz\") pod \"barbican-8204-account-create-update-96fmd\" (UID: \"70ee1b4f-d0ae-44c6-89fa-03712279a648\") " pod="openstack/barbican-8204-account-create-update-96fmd" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.999262 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp8dg\" (UniqueName: \"kubernetes.io/projected/7efe4b83-3d8c-4e4a-a7e0-0bbccb160506-kube-api-access-hp8dg\") pod \"cloudkitty-db-create-9txqv\" (UID: \"7efe4b83-3d8c-4e4a-a7e0-0bbccb160506\") " pod="openstack/cloudkitty-db-create-9txqv" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.999345 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/248e9264-b0ec-412b-aa16-0c3869d5f245-config-data\") pod \"keystone-db-sync-x74v2\" (UID: \"248e9264-b0ec-412b-aa16-0c3869d5f245\") " pod="openstack/keystone-db-sync-x74v2" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.999467 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/248e9264-b0ec-412b-aa16-0c3869d5f245-combined-ca-bundle\") pod \"keystone-db-sync-x74v2\" (UID: \"248e9264-b0ec-412b-aa16-0c3869d5f245\") " pod="openstack/keystone-db-sync-x74v2" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.999540 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g6ks\" (UniqueName: \"kubernetes.io/projected/248e9264-b0ec-412b-aa16-0c3869d5f245-kube-api-access-4g6ks\") pod \"keystone-db-sync-x74v2\" (UID: 
\"248e9264-b0ec-412b-aa16-0c3869d5f245\") " pod="openstack/keystone-db-sync-x74v2" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.999617 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70ee1b4f-d0ae-44c6-89fa-03712279a648-operator-scripts\") pod \"barbican-8204-account-create-update-96fmd\" (UID: \"70ee1b4f-d0ae-44c6-89fa-03712279a648\") " pod="openstack/barbican-8204-account-create-update-96fmd" Feb 16 17:20:07 crc kubenswrapper[4870]: I0216 17:20:07.999726 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wwn5\" (UniqueName: \"kubernetes.io/projected/9c8030e8-0df0-478c-83b8-2144a0402358-kube-api-access-6wwn5\") pod \"neutron-db-create-dfjd4\" (UID: \"9c8030e8-0df0-478c-83b8-2144a0402358\") " pod="openstack/neutron-db-create-dfjd4" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.001033 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7efe4b83-3d8c-4e4a-a7e0-0bbccb160506-operator-scripts\") pod \"cloudkitty-db-create-9txqv\" (UID: \"7efe4b83-3d8c-4e4a-a7e0-0bbccb160506\") " pod="openstack/cloudkitty-db-create-9txqv" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.001966 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70ee1b4f-d0ae-44c6-89fa-03712279a648-operator-scripts\") pod \"barbican-8204-account-create-update-96fmd\" (UID: \"70ee1b4f-d0ae-44c6-89fa-03712279a648\") " pod="openstack/barbican-8204-account-create-update-96fmd" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.006393 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9496-account-create-update-vvscs"] Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.008117 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9496-account-create-update-vvscs" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.020526 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.047521 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f66mz\" (UniqueName: \"kubernetes.io/projected/70ee1b4f-d0ae-44c6-89fa-03712279a648-kube-api-access-f66mz\") pod \"barbican-8204-account-create-update-96fmd\" (UID: \"70ee1b4f-d0ae-44c6-89fa-03712279a648\") " pod="openstack/barbican-8204-account-create-update-96fmd" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.047637 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9496-account-create-update-vvscs"] Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.047693 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp8dg\" (UniqueName: \"kubernetes.io/projected/7efe4b83-3d8c-4e4a-a7e0-0bbccb160506-kube-api-access-hp8dg\") pod \"cloudkitty-db-create-9txqv\" (UID: \"7efe4b83-3d8c-4e4a-a7e0-0bbccb160506\") " pod="openstack/cloudkitty-db-create-9txqv" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.074790 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-m2l86"] Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.076078 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m2l86" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.087205 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-m2l86"] Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.100739 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-9txqv" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.102566 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c8030e8-0df0-478c-83b8-2144a0402358-operator-scripts\") pod \"neutron-db-create-dfjd4\" (UID: \"9c8030e8-0df0-478c-83b8-2144a0402358\") " pod="openstack/neutron-db-create-dfjd4" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.102628 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/248e9264-b0ec-412b-aa16-0c3869d5f245-config-data\") pod \"keystone-db-sync-x74v2\" (UID: \"248e9264-b0ec-412b-aa16-0c3869d5f245\") " pod="openstack/keystone-db-sync-x74v2" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.102688 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/248e9264-b0ec-412b-aa16-0c3869d5f245-combined-ca-bundle\") pod \"keystone-db-sync-x74v2\" (UID: \"248e9264-b0ec-412b-aa16-0c3869d5f245\") " pod="openstack/keystone-db-sync-x74v2" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.102710 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g6ks\" (UniqueName: \"kubernetes.io/projected/248e9264-b0ec-412b-aa16-0c3869d5f245-kube-api-access-4g6ks\") pod \"keystone-db-sync-x74v2\" (UID: \"248e9264-b0ec-412b-aa16-0c3869d5f245\") " pod="openstack/keystone-db-sync-x74v2" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.102743 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b165ae4-b42a-4351-8bde-e88b7fa65137-operator-scripts\") pod \"neutron-9496-account-create-update-vvscs\" (UID: \"7b165ae4-b42a-4351-8bde-e88b7fa65137\") " 
pod="openstack/neutron-9496-account-create-update-vvscs" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.102780 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/387afd6f-c438-44b0-ba75-db7c0ecd911b-operator-scripts\") pod \"barbican-db-create-m2l86\" (UID: \"387afd6f-c438-44b0-ba75-db7c0ecd911b\") " pod="openstack/barbican-db-create-m2l86" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.102806 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wf4p\" (UniqueName: \"kubernetes.io/projected/7b165ae4-b42a-4351-8bde-e88b7fa65137-kube-api-access-4wf4p\") pod \"neutron-9496-account-create-update-vvscs\" (UID: \"7b165ae4-b42a-4351-8bde-e88b7fa65137\") " pod="openstack/neutron-9496-account-create-update-vvscs" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.102854 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wwn5\" (UniqueName: \"kubernetes.io/projected/9c8030e8-0df0-478c-83b8-2144a0402358-kube-api-access-6wwn5\") pod \"neutron-db-create-dfjd4\" (UID: \"9c8030e8-0df0-478c-83b8-2144a0402358\") " pod="openstack/neutron-db-create-dfjd4" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.102898 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdz2x\" (UniqueName: \"kubernetes.io/projected/387afd6f-c438-44b0-ba75-db7c0ecd911b-kube-api-access-rdz2x\") pod \"barbican-db-create-m2l86\" (UID: \"387afd6f-c438-44b0-ba75-db7c0ecd911b\") " pod="openstack/barbican-db-create-m2l86" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.103636 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c8030e8-0df0-478c-83b8-2144a0402358-operator-scripts\") pod \"neutron-db-create-dfjd4\" 
(UID: \"9c8030e8-0df0-478c-83b8-2144a0402358\") " pod="openstack/neutron-db-create-dfjd4" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.120743 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/248e9264-b0ec-412b-aa16-0c3869d5f245-combined-ca-bundle\") pod \"keystone-db-sync-x74v2\" (UID: \"248e9264-b0ec-412b-aa16-0c3869d5f245\") " pod="openstack/keystone-db-sync-x74v2" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.124030 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/248e9264-b0ec-412b-aa16-0c3869d5f245-config-data\") pod \"keystone-db-sync-x74v2\" (UID: \"248e9264-b0ec-412b-aa16-0c3869d5f245\") " pod="openstack/keystone-db-sync-x74v2" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.132241 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8204-account-create-update-96fmd" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.134265 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wwn5\" (UniqueName: \"kubernetes.io/projected/9c8030e8-0df0-478c-83b8-2144a0402358-kube-api-access-6wwn5\") pod \"neutron-db-create-dfjd4\" (UID: \"9c8030e8-0df0-478c-83b8-2144a0402358\") " pod="openstack/neutron-db-create-dfjd4" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.137618 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g6ks\" (UniqueName: \"kubernetes.io/projected/248e9264-b0ec-412b-aa16-0c3869d5f245-kube-api-access-4g6ks\") pod \"keystone-db-sync-x74v2\" (UID: \"248e9264-b0ec-412b-aa16-0c3869d5f245\") " pod="openstack/keystone-db-sync-x74v2" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.161034 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-93e7-account-create-update-54hdz"] Feb 16 17:20:08 crc 
kubenswrapper[4870]: I0216 17:20:08.162191 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-93e7-account-create-update-54hdz" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.168097 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-93e7-account-create-update-54hdz"] Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.178015 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.205910 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdz2x\" (UniqueName: \"kubernetes.io/projected/387afd6f-c438-44b0-ba75-db7c0ecd911b-kube-api-access-rdz2x\") pod \"barbican-db-create-m2l86\" (UID: \"387afd6f-c438-44b0-ba75-db7c0ecd911b\") " pod="openstack/barbican-db-create-m2l86" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.206015 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4z9k\" (UniqueName: \"kubernetes.io/projected/0aaab981-61b9-43df-a72c-8543ad202980-kube-api-access-l4z9k\") pod \"cloudkitty-93e7-account-create-update-54hdz\" (UID: \"0aaab981-61b9-43df-a72c-8543ad202980\") " pod="openstack/cloudkitty-93e7-account-create-update-54hdz" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.206131 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0aaab981-61b9-43df-a72c-8543ad202980-operator-scripts\") pod \"cloudkitty-93e7-account-create-update-54hdz\" (UID: \"0aaab981-61b9-43df-a72c-8543ad202980\") " pod="openstack/cloudkitty-93e7-account-create-update-54hdz" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.206255 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7b165ae4-b42a-4351-8bde-e88b7fa65137-operator-scripts\") pod \"neutron-9496-account-create-update-vvscs\" (UID: \"7b165ae4-b42a-4351-8bde-e88b7fa65137\") " pod="openstack/neutron-9496-account-create-update-vvscs" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.206324 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/387afd6f-c438-44b0-ba75-db7c0ecd911b-operator-scripts\") pod \"barbican-db-create-m2l86\" (UID: \"387afd6f-c438-44b0-ba75-db7c0ecd911b\") " pod="openstack/barbican-db-create-m2l86" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.206356 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wf4p\" (UniqueName: \"kubernetes.io/projected/7b165ae4-b42a-4351-8bde-e88b7fa65137-kube-api-access-4wf4p\") pod \"neutron-9496-account-create-update-vvscs\" (UID: \"7b165ae4-b42a-4351-8bde-e88b7fa65137\") " pod="openstack/neutron-9496-account-create-update-vvscs" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.207409 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b165ae4-b42a-4351-8bde-e88b7fa65137-operator-scripts\") pod \"neutron-9496-account-create-update-vvscs\" (UID: \"7b165ae4-b42a-4351-8bde-e88b7fa65137\") " pod="openstack/neutron-9496-account-create-update-vvscs" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.207963 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/387afd6f-c438-44b0-ba75-db7c0ecd911b-operator-scripts\") pod \"barbican-db-create-m2l86\" (UID: \"387afd6f-c438-44b0-ba75-db7c0ecd911b\") " pod="openstack/barbican-db-create-m2l86" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.226685 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdz2x\" 
(UniqueName: \"kubernetes.io/projected/387afd6f-c438-44b0-ba75-db7c0ecd911b-kube-api-access-rdz2x\") pod \"barbican-db-create-m2l86\" (UID: \"387afd6f-c438-44b0-ba75-db7c0ecd911b\") " pod="openstack/barbican-db-create-m2l86" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.229416 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wf4p\" (UniqueName: \"kubernetes.io/projected/7b165ae4-b42a-4351-8bde-e88b7fa65137-kube-api-access-4wf4p\") pod \"neutron-9496-account-create-update-vvscs\" (UID: \"7b165ae4-b42a-4351-8bde-e88b7fa65137\") " pod="openstack/neutron-9496-account-create-update-vvscs" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.255671 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dfjd4" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.280306 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x74v2" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.308130 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0aaab981-61b9-43df-a72c-8543ad202980-operator-scripts\") pod \"cloudkitty-93e7-account-create-update-54hdz\" (UID: \"0aaab981-61b9-43df-a72c-8543ad202980\") " pod="openstack/cloudkitty-93e7-account-create-update-54hdz" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.308370 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4z9k\" (UniqueName: \"kubernetes.io/projected/0aaab981-61b9-43df-a72c-8543ad202980-kube-api-access-l4z9k\") pod \"cloudkitty-93e7-account-create-update-54hdz\" (UID: \"0aaab981-61b9-43df-a72c-8543ad202980\") " pod="openstack/cloudkitty-93e7-account-create-update-54hdz" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.309779 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0aaab981-61b9-43df-a72c-8543ad202980-operator-scripts\") pod \"cloudkitty-93e7-account-create-update-54hdz\" (UID: \"0aaab981-61b9-43df-a72c-8543ad202980\") " pod="openstack/cloudkitty-93e7-account-create-update-54hdz" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.328667 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4z9k\" (UniqueName: \"kubernetes.io/projected/0aaab981-61b9-43df-a72c-8543ad202980-kube-api-access-l4z9k\") pod \"cloudkitty-93e7-account-create-update-54hdz\" (UID: \"0aaab981-61b9-43df-a72c-8543ad202980\") " pod="openstack/cloudkitty-93e7-account-create-update-54hdz" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.414308 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9496-account-create-update-vvscs" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.487447 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m2l86" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.499717 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-93e7-account-create-update-54hdz" Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.502714 4870 generic.go:334] "Generic (PLEG): container finished" podID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerID="f5a882f2f11a92dadabc50e0003aa9d74ba5ba163255892e82d9bb7ddff52a12" exitCode=0 Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.502741 4870 generic.go:334] "Generic (PLEG): container finished" podID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerID="6184c5ab8567c650caaca098276056713018801c2c0b86f2e11eca261f48e205" exitCode=0 Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.502764 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"523f77f6-c829-4d3d-99c1-45bafcb30ee3","Type":"ContainerDied","Data":"f5a882f2f11a92dadabc50e0003aa9d74ba5ba163255892e82d9bb7ddff52a12"} Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.502788 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"523f77f6-c829-4d3d-99c1-45bafcb30ee3","Type":"ContainerDied","Data":"6184c5ab8567c650caaca098276056713018801c2c0b86f2e11eca261f48e205"} Feb 16 17:20:08 crc kubenswrapper[4870]: I0216 17:20:08.590772 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.114:9090/-/ready\": dial tcp 10.217.0.114:9090: connect: connection refused" Feb 16 17:20:09 crc kubenswrapper[4870]: I0216 17:20:09.517597 4870 generic.go:334] "Generic (PLEG): container finished" podID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerID="f8ffa535d7520af1c4e297425d560fd1a7020c5dc0ea84661efce87959918cc2" exitCode=0 Feb 16 17:20:09 crc kubenswrapper[4870]: I0216 17:20:09.517655 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"523f77f6-c829-4d3d-99c1-45bafcb30ee3","Type":"ContainerDied","Data":"f8ffa535d7520af1c4e297425d560fd1a7020c5dc0ea84661efce87959918cc2"} Feb 16 17:20:12 crc kubenswrapper[4870]: E0216 17:20:12.609484 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 16 17:20:12 crc kubenswrapper[4870]: E0216 17:20:12.609907 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rsdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-lo
g,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-5r2tl_openstack(998e2386-0941-4f2b-8e23-d77138831ad4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:20:12 crc kubenswrapper[4870]: E0216 17:20:12.611203 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-5r2tl" podUID="998e2386-0941-4f2b-8e23-d77138831ad4" Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.690385 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ktsg2-config-wfrgb" Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.704274 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-run\") pod \"c70a0819-992e-4dd0-96b8-5970678cca52\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.704479 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-run-ovn\") pod \"c70a0819-992e-4dd0-96b8-5970678cca52\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.704524 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c70a0819-992e-4dd0-96b8-5970678cca52-scripts\") pod \"c70a0819-992e-4dd0-96b8-5970678cca52\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.704724 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-log-ovn\") pod \"c70a0819-992e-4dd0-96b8-5970678cca52\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.706973 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-run" (OuterVolumeSpecName: "var-run") pod "c70a0819-992e-4dd0-96b8-5970678cca52" (UID: "c70a0819-992e-4dd0-96b8-5970678cca52"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.707087 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbvmq\" (UniqueName: \"kubernetes.io/projected/c70a0819-992e-4dd0-96b8-5970678cca52-kube-api-access-tbvmq\") pod \"c70a0819-992e-4dd0-96b8-5970678cca52\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.707142 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c70a0819-992e-4dd0-96b8-5970678cca52-additional-scripts\") pod \"c70a0819-992e-4dd0-96b8-5970678cca52\" (UID: \"c70a0819-992e-4dd0-96b8-5970678cca52\") " Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.708099 4870 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.708820 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c70a0819-992e-4dd0-96b8-5970678cca52" (UID: "c70a0819-992e-4dd0-96b8-5970678cca52"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.708846 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c70a0819-992e-4dd0-96b8-5970678cca52" (UID: "c70a0819-992e-4dd0-96b8-5970678cca52"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.709231 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c70a0819-992e-4dd0-96b8-5970678cca52-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c70a0819-992e-4dd0-96b8-5970678cca52" (UID: "c70a0819-992e-4dd0-96b8-5970678cca52"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.709855 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c70a0819-992e-4dd0-96b8-5970678cca52-scripts" (OuterVolumeSpecName: "scripts") pod "c70a0819-992e-4dd0-96b8-5970678cca52" (UID: "c70a0819-992e-4dd0-96b8-5970678cca52"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.721705 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c70a0819-992e-4dd0-96b8-5970678cca52-kube-api-access-tbvmq" (OuterVolumeSpecName: "kube-api-access-tbvmq") pod "c70a0819-992e-4dd0-96b8-5970678cca52" (UID: "c70a0819-992e-4dd0-96b8-5970678cca52"). InnerVolumeSpecName "kube-api-access-tbvmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.810516 4870 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.810576 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbvmq\" (UniqueName: \"kubernetes.io/projected/c70a0819-992e-4dd0-96b8-5970678cca52-kube-api-access-tbvmq\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.810592 4870 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c70a0819-992e-4dd0-96b8-5970678cca52-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.810604 4870 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c70a0819-992e-4dd0-96b8-5970678cca52-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:12 crc kubenswrapper[4870]: I0216 17:20:12.810617 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c70a0819-992e-4dd0-96b8-5970678cca52-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.232437 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.332620 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-web-config\") pod \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.333022 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-0\") pod \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.333055 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-config\") pod \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.333082 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2742\" (UniqueName: \"kubernetes.io/projected/523f77f6-c829-4d3d-99c1-45bafcb30ee3-kube-api-access-f2742\") pod \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.333224 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\") pod \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.333259 4870 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-2\") pod \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.333319 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/523f77f6-c829-4d3d-99c1-45bafcb30ee3-tls-assets\") pod \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.333369 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/523f77f6-c829-4d3d-99c1-45bafcb30ee3-config-out\") pod \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.333419 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-thanos-prometheus-http-client-file\") pod \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.333535 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-1\") pod \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\" (UID: \"523f77f6-c829-4d3d-99c1-45bafcb30ee3\") " Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.334546 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "523f77f6-c829-4d3d-99c1-45bafcb30ee3" (UID: "523f77f6-c829-4d3d-99c1-45bafcb30ee3"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.335714 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "523f77f6-c829-4d3d-99c1-45bafcb30ee3" (UID: "523f77f6-c829-4d3d-99c1-45bafcb30ee3"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.335734 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "523f77f6-c829-4d3d-99c1-45bafcb30ee3" (UID: "523f77f6-c829-4d3d-99c1-45bafcb30ee3"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.350834 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/523f77f6-c829-4d3d-99c1-45bafcb30ee3-kube-api-access-f2742" (OuterVolumeSpecName: "kube-api-access-f2742") pod "523f77f6-c829-4d3d-99c1-45bafcb30ee3" (UID: "523f77f6-c829-4d3d-99c1-45bafcb30ee3"). InnerVolumeSpecName "kube-api-access-f2742". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.364098 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/523f77f6-c829-4d3d-99c1-45bafcb30ee3-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "523f77f6-c829-4d3d-99c1-45bafcb30ee3" (UID: "523f77f6-c829-4d3d-99c1-45bafcb30ee3"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.364352 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-config" (OuterVolumeSpecName: "config") pod "523f77f6-c829-4d3d-99c1-45bafcb30ee3" (UID: "523f77f6-c829-4d3d-99c1-45bafcb30ee3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.364678 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "523f77f6-c829-4d3d-99c1-45bafcb30ee3" (UID: "523f77f6-c829-4d3d-99c1-45bafcb30ee3"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.364779 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/523f77f6-c829-4d3d-99c1-45bafcb30ee3-config-out" (OuterVolumeSpecName: "config-out") pod "523f77f6-c829-4d3d-99c1-45bafcb30ee3" (UID: "523f77f6-c829-4d3d-99c1-45bafcb30ee3"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.387254 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-web-config" (OuterVolumeSpecName: "web-config") pod "523f77f6-c829-4d3d-99c1-45bafcb30ee3" (UID: "523f77f6-c829-4d3d-99c1-45bafcb30ee3"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.387628 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "523f77f6-c829-4d3d-99c1-45bafcb30ee3" (UID: "523f77f6-c829-4d3d-99c1-45bafcb30ee3"). InnerVolumeSpecName "pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.435161 4870 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.435198 4870 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.435211 4870 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-web-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.435221 4870 reconciler_common.go:293] "Volume detached for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.435232 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/523f77f6-c829-4d3d-99c1-45bafcb30ee3-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.435242 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2742\" (UniqueName: \"kubernetes.io/projected/523f77f6-c829-4d3d-99c1-45bafcb30ee3-kube-api-access-f2742\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.435281 4870 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\") on node \"crc\" " Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.435292 4870 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/523f77f6-c829-4d3d-99c1-45bafcb30ee3-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.435496 4870 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/523f77f6-c829-4d3d-99c1-45bafcb30ee3-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.435504 4870 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/523f77f6-c829-4d3d-99c1-45bafcb30ee3-config-out\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.454688 4870 csi_attacher.go:630] kubernetes.io/csi: 
attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.454838 4870 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6") on node "crc" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.537475 4870 reconciler_common.go:293] "Volume detached for volume \"pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.554870 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ktsg2-config-wfrgb" event={"ID":"c70a0819-992e-4dd0-96b8-5970678cca52","Type":"ContainerDied","Data":"7fe573f38c7d208ac95003dcdbfd7c8c05911f656faeba4bc06d57990f9a7339"} Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.554917 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fe573f38c7d208ac95003dcdbfd7c8c05911f656faeba4bc06d57990f9a7339" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.555032 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ktsg2-config-wfrgb" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.559893 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"523f77f6-c829-4d3d-99c1-45bafcb30ee3","Type":"ContainerDied","Data":"43bfeefee6f1e3c831ed4e4534d7ea9e780830016fa4bc01a91c9ec7f2d0487b"} Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.559936 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.559975 4870 scope.go:117] "RemoveContainer" containerID="f5a882f2f11a92dadabc50e0003aa9d74ba5ba163255892e82d9bb7ddff52a12" Feb 16 17:20:13 crc kubenswrapper[4870]: E0216 17:20:13.580191 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-5r2tl" podUID="998e2386-0941-4f2b-8e23-d77138831ad4" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.649123 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2953-account-create-update-c479c"] Feb 16 17:20:13 crc kubenswrapper[4870]: W0216 17:20:13.655986 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda05594c3_a7c9_4326_83d3_8602b8077b29.slice/crio-f47c2b2ef06bebdc19e5f7b2cad6526acbcb8da8f73f617f42d1c594863fb67f WatchSource:0}: Error finding container f47c2b2ef06bebdc19e5f7b2cad6526acbcb8da8f73f617f42d1c594863fb67f: Status 404 returned error can't find the container with id f47c2b2ef06bebdc19e5f7b2cad6526acbcb8da8f73f617f42d1c594863fb67f Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.681159 4870 scope.go:117] "RemoveContainer" containerID="6184c5ab8567c650caaca098276056713018801c2c0b86f2e11eca261f48e205" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.742368 4870 scope.go:117] "RemoveContainer" containerID="f8ffa535d7520af1c4e297425d560fd1a7020c5dc0ea84661efce87959918cc2" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.743212 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.756200 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.773744 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 17:20:13 crc kubenswrapper[4870]: E0216 17:20:13.774269 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c70a0819-992e-4dd0-96b8-5970678cca52" containerName="ovn-config" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.774287 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="c70a0819-992e-4dd0-96b8-5970678cca52" containerName="ovn-config" Feb 16 17:20:13 crc kubenswrapper[4870]: E0216 17:20:13.774305 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerName="init-config-reloader" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.774313 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerName="init-config-reloader" Feb 16 17:20:13 crc kubenswrapper[4870]: E0216 17:20:13.774325 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerName="prometheus" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.774332 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerName="prometheus" Feb 16 17:20:13 crc kubenswrapper[4870]: E0216 17:20:13.774352 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerName="config-reloader" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.774361 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerName="config-reloader" Feb 16 17:20:13 crc kubenswrapper[4870]: E0216 17:20:13.774387 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerName="thanos-sidecar" Feb 16 
17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.774393 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerName="thanos-sidecar" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.774630 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerName="config-reloader" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.774658 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="c70a0819-992e-4dd0-96b8-5970678cca52" containerName="ovn-config" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.774670 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerName="thanos-sidecar" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.774692 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" containerName="prometheus" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.777373 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.781936 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.782129 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.784299 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.784515 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.785300 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-zxr4b" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.785572 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.785792 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.786250 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.787049 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.791446 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.806013 4870 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ovn-controller-ktsg2-config-wfrgb"] Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.820685 4870 scope.go:117] "RemoveContainer" containerID="3f00fdf72c01fcfa772f386ad93e343124a9ed164f27f4e1851ac3ab6b7344e6" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.825283 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ktsg2-config-wfrgb"] Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.843698 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.843740 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/39dde5ae-2522-43c8-a0e0-9e257052bab6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.843762 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39dde5ae-2522-43c8-a0e0-9e257052bab6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.843788 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.843805 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-config\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.843824 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsc6m\" (UniqueName: \"kubernetes.io/projected/39dde5ae-2522-43c8-a0e0-9e257052bab6-kube-api-access-bsc6m\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.843865 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/39dde5ae-2522-43c8-a0e0-9e257052bab6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.843885 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.843933 4870 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/39dde5ae-2522-43c8-a0e0-9e257052bab6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.843994 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.844036 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.844071 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39dde5ae-2522-43c8-a0e0-9e257052bab6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.844109 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.873080 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ktsg2-config-wn2rt"] Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.874238 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.881686 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.905136 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ktsg2-config-wn2rt"] Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948252 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-run\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948288 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-run-ovn\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948332 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948355 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-scripts\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948374 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-additional-scripts\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948416 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948438 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/39dde5ae-2522-43c8-a0e0-9e257052bab6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948457 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39dde5ae-2522-43c8-a0e0-9e257052bab6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948484 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948503 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-config\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948526 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsc6m\" (UniqueName: \"kubernetes.io/projected/39dde5ae-2522-43c8-a0e0-9e257052bab6-kube-api-access-bsc6m\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948571 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/39dde5ae-2522-43c8-a0e0-9e257052bab6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc 
kubenswrapper[4870]: I0216 17:20:13.948591 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948637 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/39dde5ae-2522-43c8-a0e0-9e257052bab6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948657 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948675 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-log-ovn\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948705 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgj9n\" (UniqueName: \"kubernetes.io/projected/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-kube-api-access-lgj9n\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: 
\"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948723 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.948750 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39dde5ae-2522-43c8-a0e0-9e257052bab6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.954254 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.954416 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/39dde5ae-2522-43c8-a0e0-9e257052bab6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.955027 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/39dde5ae-2522-43c8-a0e0-9e257052bab6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.955763 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/39dde5ae-2522-43c8-a0e0-9e257052bab6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.956140 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.956180 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/db39f485af21a79151032c6fa9f638ff58e4b7e89021845f15a51ead92dc9627/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.958097 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39dde5ae-2522-43c8-a0e0-9e257052bab6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.960164 4870 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.960581 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.961704 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-config\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.963283 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39dde5ae-2522-43c8-a0e0-9e257052bab6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.967790 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.969709 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/39dde5ae-2522-43c8-a0e0-9e257052bab6-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:13 crc kubenswrapper[4870]: I0216 17:20:13.973076 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsc6m\" (UniqueName: \"kubernetes.io/projected/39dde5ae-2522-43c8-a0e0-9e257052bab6-kube-api-access-bsc6m\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.007923 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5bd7f6a-62a3-4d2b-bf31-e327e419d8d6\") pod \"prometheus-metric-storage-0\" (UID: \"39dde5ae-2522-43c8-a0e0-9e257052bab6\") " pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.050212 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-scripts\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.050286 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-additional-scripts\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 
17:20:14.050459 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-log-ovn\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.050495 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgj9n\" (UniqueName: \"kubernetes.io/projected/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-kube-api-access-lgj9n\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.050531 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-run\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.050548 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-run-ovn\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.050846 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-run-ovn\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.051586 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-additional-scripts\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.051643 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-log-ovn\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.051929 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-run\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.052281 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-scripts\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.067913 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgj9n\" (UniqueName: \"kubernetes.io/projected/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-kube-api-access-lgj9n\") pod \"ovn-controller-ktsg2-config-wn2rt\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.095405 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-db-create-kq5cz"] Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.103890 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="d158e8d5-206e-4289-a1e5-247fddf29a11" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.114769 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-m2l86"] Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.130474 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-9txqv"] Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.134491 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.165014 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8204-account-create-update-96fmd"] Feb 16 17:20:14 crc kubenswrapper[4870]: W0216 17:20:14.170912 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70ee1b4f_d0ae_44c6_89fa_03712279a648.slice/crio-dbd44dc60c157d9002d3a52463ad3afd43ed233d7bb09df556275ce58bcde3c0 WatchSource:0}: Error finding container dbd44dc60c157d9002d3a52463ad3afd43ed233d7bb09df556275ce58bcde3c0: Status 404 returned error can't find the container with id dbd44dc60c157d9002d3a52463ad3afd43ed233d7bb09df556275ce58bcde3c0 Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.185802 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x74v2"] Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.207482 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dfjd4"] Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.216206 4870 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:14 crc kubenswrapper[4870]: W0216 17:20:14.220531 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod248e9264_b0ec_412b_aa16_0c3869d5f245.slice/crio-2a247dff549e4e0ed2ab02b276e268a29dee7d37e179f7dddaeb015a0b0edb8e WatchSource:0}: Error finding container 2a247dff549e4e0ed2ab02b276e268a29dee7d37e179f7dddaeb015a0b0edb8e: Status 404 returned error can't find the container with id 2a247dff549e4e0ed2ab02b276e268a29dee7d37e179f7dddaeb015a0b0edb8e Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.221247 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9496-account-create-update-vvscs"] Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.244694 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="523f77f6-c829-4d3d-99c1-45bafcb30ee3" path="/var/lib/kubelet/pods/523f77f6-c829-4d3d-99c1-45bafcb30ee3/volumes" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.245905 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c70a0819-992e-4dd0-96b8-5970678cca52" path="/var/lib/kubelet/pods/c70a0819-992e-4dd0-96b8-5970678cca52/volumes" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.246822 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-93e7-account-create-update-54hdz"] Feb 16 17:20:14 crc kubenswrapper[4870]: W0216 17:20:14.247191 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c8030e8_0df0_478c_83b8_2144a0402358.slice/crio-b1c07fb4bb183fca81d22b308f41f2deb27f72ff013dfca6a1264031b057e236 WatchSource:0}: Error finding container b1c07fb4bb183fca81d22b308f41f2deb27f72ff013dfca6a1264031b057e236: Status 404 returned error can't find the container with id 
b1c07fb4bb183fca81d22b308f41f2deb27f72ff013dfca6a1264031b057e236 Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.297731 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.576248 4870 generic.go:334] "Generic (PLEG): container finished" podID="3a5551a3-6fd8-479b-ad29-488078cc5ad1" containerID="c4eb87cdfbe2c84d44ae4f5bfb73ed4f19e3b389a7bb4ebfdcf82c640f76c0b2" exitCode=0 Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.576427 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kq5cz" event={"ID":"3a5551a3-6fd8-479b-ad29-488078cc5ad1","Type":"ContainerDied","Data":"c4eb87cdfbe2c84d44ae4f5bfb73ed4f19e3b389a7bb4ebfdcf82c640f76c0b2"} Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.576493 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kq5cz" event={"ID":"3a5551a3-6fd8-479b-ad29-488078cc5ad1","Type":"ContainerStarted","Data":"7e762828193eb0e2461eb4bf193a747a20e16757808a0ec991ee16c0a4c8ccb4"} Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.577453 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9496-account-create-update-vvscs" event={"ID":"7b165ae4-b42a-4351-8bde-e88b7fa65137","Type":"ContainerStarted","Data":"5b27194757337d46d3781d00455c67bcb1da1bb798692a87d34d70aa627a0d2d"} Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.579018 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x74v2" event={"ID":"248e9264-b0ec-412b-aa16-0c3869d5f245","Type":"ContainerStarted","Data":"2a247dff549e4e0ed2ab02b276e268a29dee7d37e179f7dddaeb015a0b0edb8e"} Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.590310 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m2l86" 
event={"ID":"387afd6f-c438-44b0-ba75-db7c0ecd911b","Type":"ContainerStarted","Data":"d1ab562c0cfd6ac7977bea4748ed668dd646fb38dd9ae0a02e182c84fd7f5276"} Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.590357 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m2l86" event={"ID":"387afd6f-c438-44b0-ba75-db7c0ecd911b","Type":"ContainerStarted","Data":"91d9cfbf763de508bd294a9ad13d43890ff755b36aa1d0a6e38112d10124f647"} Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.598294 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-93e7-account-create-update-54hdz" event={"ID":"0aaab981-61b9-43df-a72c-8543ad202980","Type":"ContainerStarted","Data":"db99c329aef9c80d04bd1ca962186ab576a85532a42ce0f81df55221d705a5fb"} Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.601663 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"669a24d2-3e17-4ce1-aba2-c45d2a92683a","Type":"ContainerStarted","Data":"0d1effcdef193901316fece91f0c037d5720d848308c7b4c99f24197383f952f"} Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.612451 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dfjd4" event={"ID":"9c8030e8-0df0-478c-83b8-2144a0402358","Type":"ContainerStarted","Data":"2c35b26d7811f92ff1318317667c229c8b097ce669520923a326520d13548b7b"} Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.614064 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dfjd4" event={"ID":"9c8030e8-0df0-478c-83b8-2144a0402358","Type":"ContainerStarted","Data":"b1c07fb4bb183fca81d22b308f41f2deb27f72ff013dfca6a1264031b057e236"} Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.631393 4870 generic.go:334] "Generic (PLEG): container finished" podID="a05594c3-a7c9-4326-83d3-8602b8077b29" containerID="088d4deb00e2be749a3a4320a97cd43fd3e799d4cdac734ca4cf94854cca8a4e" exitCode=0 Feb 16 17:20:14 crc 
kubenswrapper[4870]: I0216 17:20:14.631466 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2953-account-create-update-c479c" event={"ID":"a05594c3-a7c9-4326-83d3-8602b8077b29","Type":"ContainerDied","Data":"088d4deb00e2be749a3a4320a97cd43fd3e799d4cdac734ca4cf94854cca8a4e"} Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.631492 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2953-account-create-update-c479c" event={"ID":"a05594c3-a7c9-4326-83d3-8602b8077b29","Type":"ContainerStarted","Data":"f47c2b2ef06bebdc19e5f7b2cad6526acbcb8da8f73f617f42d1c594863fb67f"} Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.635615 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-9txqv" event={"ID":"7efe4b83-3d8c-4e4a-a7e0-0bbccb160506","Type":"ContainerStarted","Data":"e7ff58a46ba14d759833386122a7230b205cdc58ce52a1221f1f4038bd096973"} Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.635666 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-9txqv" event={"ID":"7efe4b83-3d8c-4e4a-a7e0-0bbccb160506","Type":"ContainerStarted","Data":"98a2251d0e3abc1e27117f4b9821ab48281db05d5b4537c3a5485bf944a70638"} Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.641662 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8204-account-create-update-96fmd" event={"ID":"70ee1b4f-d0ae-44c6-89fa-03712279a648","Type":"ContainerStarted","Data":"dbd44dc60c157d9002d3a52463ad3afd43ed233d7bb09df556275ce58bcde3c0"} Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.643554 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-m2l86" podStartSLOduration=7.643535449 podStartE2EDuration="7.643535449s" podCreationTimestamp="2026-02-16 17:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 17:20:14.612201158 +0000 UTC m=+1219.095665552" watchObservedRunningTime="2026-02-16 17:20:14.643535449 +0000 UTC m=+1219.126999833" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.662613 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-dfjd4" podStartSLOduration=7.662591485 podStartE2EDuration="7.662591485s" podCreationTimestamp="2026-02-16 17:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:14.639492676 +0000 UTC m=+1219.122957060" watchObservedRunningTime="2026-02-16 17:20:14.662591485 +0000 UTC m=+1219.146055869" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.706967 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-create-9txqv" podStartSLOduration=7.706926183 podStartE2EDuration="7.706926183s" podCreationTimestamp="2026-02-16 17:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:14.677524766 +0000 UTC m=+1219.160989170" watchObservedRunningTime="2026-02-16 17:20:14.706926183 +0000 UTC m=+1219.190390567" Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.758515 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 17:20:14 crc kubenswrapper[4870]: I0216 17:20:14.852876 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ktsg2-config-wn2rt"] Feb 16 17:20:14 crc kubenswrapper[4870]: W0216 17:20:14.881120 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb867ab55_eb6f_4a2e_9ed6_a4fbfb7ba93c.slice/crio-a88fe9b928cef3ff9ccb48dd2e2ac950d06cba00071c9a5c00bf2b2f56598fe9 WatchSource:0}: Error finding container 
a88fe9b928cef3ff9ccb48dd2e2ac950d06cba00071c9a5c00bf2b2f56598fe9: Status 404 returned error can't find the container with id a88fe9b928cef3ff9ccb48dd2e2ac950d06cba00071c9a5c00bf2b2f56598fe9 Feb 16 17:20:15 crc kubenswrapper[4870]: I0216 17:20:15.655883 4870 generic.go:334] "Generic (PLEG): container finished" podID="9c8030e8-0df0-478c-83b8-2144a0402358" containerID="2c35b26d7811f92ff1318317667c229c8b097ce669520923a326520d13548b7b" exitCode=0 Feb 16 17:20:15 crc kubenswrapper[4870]: I0216 17:20:15.656007 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dfjd4" event={"ID":"9c8030e8-0df0-478c-83b8-2144a0402358","Type":"ContainerDied","Data":"2c35b26d7811f92ff1318317667c229c8b097ce669520923a326520d13548b7b"} Feb 16 17:20:15 crc kubenswrapper[4870]: I0216 17:20:15.658911 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39dde5ae-2522-43c8-a0e0-9e257052bab6","Type":"ContainerStarted","Data":"c8815cece4f2add6f2fe0b6190d50b464902f4df51ab610fd2348277bb4054dd"} Feb 16 17:20:15 crc kubenswrapper[4870]: I0216 17:20:15.661099 4870 generic.go:334] "Generic (PLEG): container finished" podID="7efe4b83-3d8c-4e4a-a7e0-0bbccb160506" containerID="e7ff58a46ba14d759833386122a7230b205cdc58ce52a1221f1f4038bd096973" exitCode=0 Feb 16 17:20:15 crc kubenswrapper[4870]: I0216 17:20:15.661201 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-9txqv" event={"ID":"7efe4b83-3d8c-4e4a-a7e0-0bbccb160506","Type":"ContainerDied","Data":"e7ff58a46ba14d759833386122a7230b205cdc58ce52a1221f1f4038bd096973"} Feb 16 17:20:15 crc kubenswrapper[4870]: I0216 17:20:15.668062 4870 generic.go:334] "Generic (PLEG): container finished" podID="7b165ae4-b42a-4351-8bde-e88b7fa65137" containerID="07001bc8b8a2ae76e78241c9cfb34835feb178c03540945ad4518828f2d8f866" exitCode=0 Feb 16 17:20:15 crc kubenswrapper[4870]: I0216 17:20:15.668101 4870 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/neutron-9496-account-create-update-vvscs" event={"ID":"7b165ae4-b42a-4351-8bde-e88b7fa65137","Type":"ContainerDied","Data":"07001bc8b8a2ae76e78241c9cfb34835feb178c03540945ad4518828f2d8f866"} Feb 16 17:20:15 crc kubenswrapper[4870]: I0216 17:20:15.670471 4870 generic.go:334] "Generic (PLEG): container finished" podID="70ee1b4f-d0ae-44c6-89fa-03712279a648" containerID="c0c71372013b216ad72c7f4d488af2b32187b99797ebcfea00c982c3d29fd08a" exitCode=0 Feb 16 17:20:15 crc kubenswrapper[4870]: I0216 17:20:15.670545 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8204-account-create-update-96fmd" event={"ID":"70ee1b4f-d0ae-44c6-89fa-03712279a648","Type":"ContainerDied","Data":"c0c71372013b216ad72c7f4d488af2b32187b99797ebcfea00c982c3d29fd08a"} Feb 16 17:20:15 crc kubenswrapper[4870]: I0216 17:20:15.672423 4870 generic.go:334] "Generic (PLEG): container finished" podID="387afd6f-c438-44b0-ba75-db7c0ecd911b" containerID="d1ab562c0cfd6ac7977bea4748ed668dd646fb38dd9ae0a02e182c84fd7f5276" exitCode=0 Feb 16 17:20:15 crc kubenswrapper[4870]: I0216 17:20:15.672546 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m2l86" event={"ID":"387afd6f-c438-44b0-ba75-db7c0ecd911b","Type":"ContainerDied","Data":"d1ab562c0cfd6ac7977bea4748ed668dd646fb38dd9ae0a02e182c84fd7f5276"} Feb 16 17:20:15 crc kubenswrapper[4870]: I0216 17:20:15.674240 4870 generic.go:334] "Generic (PLEG): container finished" podID="b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c" containerID="70f68f4e42fe70ef1a7b5803bbda57bae816fb675efd42da2e3b194020e7318a" exitCode=0 Feb 16 17:20:15 crc kubenswrapper[4870]: I0216 17:20:15.674297 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ktsg2-config-wn2rt" event={"ID":"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c","Type":"ContainerDied","Data":"70f68f4e42fe70ef1a7b5803bbda57bae816fb675efd42da2e3b194020e7318a"} Feb 16 17:20:15 crc kubenswrapper[4870]: I0216 17:20:15.674323 
4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ktsg2-config-wn2rt" event={"ID":"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c","Type":"ContainerStarted","Data":"a88fe9b928cef3ff9ccb48dd2e2ac950d06cba00071c9a5c00bf2b2f56598fe9"} Feb 16 17:20:15 crc kubenswrapper[4870]: I0216 17:20:15.682110 4870 generic.go:334] "Generic (PLEG): container finished" podID="0aaab981-61b9-43df-a72c-8543ad202980" containerID="910b06325791e2333278b2e4bcc64593d2af14b61cf3622c10f5a89387139721" exitCode=0 Feb 16 17:20:15 crc kubenswrapper[4870]: I0216 17:20:15.682456 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-93e7-account-create-update-54hdz" event={"ID":"0aaab981-61b9-43df-a72c-8543ad202980","Type":"ContainerDied","Data":"910b06325791e2333278b2e4bcc64593d2af14b61cf3622c10f5a89387139721"} Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.069980 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2953-account-create-update-c479c" Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.077546 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-kq5cz" Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.136443 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdrwj\" (UniqueName: \"kubernetes.io/projected/3a5551a3-6fd8-479b-ad29-488078cc5ad1-kube-api-access-xdrwj\") pod \"3a5551a3-6fd8-479b-ad29-488078cc5ad1\" (UID: \"3a5551a3-6fd8-479b-ad29-488078cc5ad1\") " Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.136861 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsm22\" (UniqueName: \"kubernetes.io/projected/a05594c3-a7c9-4326-83d3-8602b8077b29-kube-api-access-lsm22\") pod \"a05594c3-a7c9-4326-83d3-8602b8077b29\" (UID: \"a05594c3-a7c9-4326-83d3-8602b8077b29\") " Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.136910 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a5551a3-6fd8-479b-ad29-488078cc5ad1-operator-scripts\") pod \"3a5551a3-6fd8-479b-ad29-488078cc5ad1\" (UID: \"3a5551a3-6fd8-479b-ad29-488078cc5ad1\") " Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.136989 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a05594c3-a7c9-4326-83d3-8602b8077b29-operator-scripts\") pod \"a05594c3-a7c9-4326-83d3-8602b8077b29\" (UID: \"a05594c3-a7c9-4326-83d3-8602b8077b29\") " Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.137996 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a5551a3-6fd8-479b-ad29-488078cc5ad1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3a5551a3-6fd8-479b-ad29-488078cc5ad1" (UID: "3a5551a3-6fd8-479b-ad29-488078cc5ad1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.138272 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a05594c3-a7c9-4326-83d3-8602b8077b29-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a05594c3-a7c9-4326-83d3-8602b8077b29" (UID: "a05594c3-a7c9-4326-83d3-8602b8077b29"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.138861 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a5551a3-6fd8-479b-ad29-488078cc5ad1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.138894 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a05594c3-a7c9-4326-83d3-8602b8077b29-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.140670 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a05594c3-a7c9-4326-83d3-8602b8077b29-kube-api-access-lsm22" (OuterVolumeSpecName: "kube-api-access-lsm22") pod "a05594c3-a7c9-4326-83d3-8602b8077b29" (UID: "a05594c3-a7c9-4326-83d3-8602b8077b29"). InnerVolumeSpecName "kube-api-access-lsm22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.142043 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a5551a3-6fd8-479b-ad29-488078cc5ad1-kube-api-access-xdrwj" (OuterVolumeSpecName: "kube-api-access-xdrwj") pod "3a5551a3-6fd8-479b-ad29-488078cc5ad1" (UID: "3a5551a3-6fd8-479b-ad29-488078cc5ad1"). InnerVolumeSpecName "kube-api-access-xdrwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.240051 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdrwj\" (UniqueName: \"kubernetes.io/projected/3a5551a3-6fd8-479b-ad29-488078cc5ad1-kube-api-access-xdrwj\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.240081 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsm22\" (UniqueName: \"kubernetes.io/projected/a05594c3-a7c9-4326-83d3-8602b8077b29-kube-api-access-lsm22\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.692370 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2953-account-create-update-c479c" event={"ID":"a05594c3-a7c9-4326-83d3-8602b8077b29","Type":"ContainerDied","Data":"f47c2b2ef06bebdc19e5f7b2cad6526acbcb8da8f73f617f42d1c594863fb67f"} Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.692407 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f47c2b2ef06bebdc19e5f7b2cad6526acbcb8da8f73f617f42d1c594863fb67f" Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.692457 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2953-account-create-update-c479c" Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.696890 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"669a24d2-3e17-4ce1-aba2-c45d2a92683a","Type":"ContainerStarted","Data":"59a88a5da607b11fb0b7b49d5eb76b6ca4d9db4b5f306c25d9cdfdd577b40289"} Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.696922 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"669a24d2-3e17-4ce1-aba2-c45d2a92683a","Type":"ContainerStarted","Data":"cba7e172acdf1fab154f85be3268795322256582628345bf8b86704a59924828"} Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.696931 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"669a24d2-3e17-4ce1-aba2-c45d2a92683a","Type":"ContainerStarted","Data":"165a535a2baa5cf74dd27b442451f126873a9f62f8a27ab84701aa13435e8a48"} Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.698797 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-kq5cz" Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.698834 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kq5cz" event={"ID":"3a5551a3-6fd8-479b-ad29-488078cc5ad1","Type":"ContainerDied","Data":"7e762828193eb0e2461eb4bf193a747a20e16757808a0ec991ee16c0a4c8ccb4"} Feb 16 17:20:16 crc kubenswrapper[4870]: I0216 17:20:16.698862 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e762828193eb0e2461eb4bf193a747a20e16757808a0ec991ee16c0a4c8ccb4" Feb 16 17:20:17 crc kubenswrapper[4870]: I0216 17:20:17.710668 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"669a24d2-3e17-4ce1-aba2-c45d2a92683a","Type":"ContainerStarted","Data":"492831447f086afb0e8562414f75ec571023648dc9304920dd9e1441445d4a06"} Feb 16 17:20:17 crc kubenswrapper[4870]: I0216 17:20:17.713130 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39dde5ae-2522-43c8-a0e0-9e257052bab6","Type":"ContainerStarted","Data":"ec58dcc7df0afc5cd363aa3aacb089656eb0ef2a6ec1a4230728bbc6888d5af9"} Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.662223 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-93e7-account-create-update-54hdz" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.677052 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8204-account-create-update-96fmd" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.688977 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dfjd4" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.708527 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9496-account-create-update-vvscs" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.717723 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4z9k\" (UniqueName: \"kubernetes.io/projected/0aaab981-61b9-43df-a72c-8543ad202980-kube-api-access-l4z9k\") pod \"0aaab981-61b9-43df-a72c-8543ad202980\" (UID: \"0aaab981-61b9-43df-a72c-8543ad202980\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.717781 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f66mz\" (UniqueName: \"kubernetes.io/projected/70ee1b4f-d0ae-44c6-89fa-03712279a648-kube-api-access-f66mz\") pod \"70ee1b4f-d0ae-44c6-89fa-03712279a648\" (UID: \"70ee1b4f-d0ae-44c6-89fa-03712279a648\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.717806 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c8030e8-0df0-478c-83b8-2144a0402358-operator-scripts\") pod \"9c8030e8-0df0-478c-83b8-2144a0402358\" (UID: \"9c8030e8-0df0-478c-83b8-2144a0402358\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.717847 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0aaab981-61b9-43df-a72c-8543ad202980-operator-scripts\") pod \"0aaab981-61b9-43df-a72c-8543ad202980\" (UID: \"0aaab981-61b9-43df-a72c-8543ad202980\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.717919 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70ee1b4f-d0ae-44c6-89fa-03712279a648-operator-scripts\") pod \"70ee1b4f-d0ae-44c6-89fa-03712279a648\" (UID: \"70ee1b4f-d0ae-44c6-89fa-03712279a648\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.718047 4870 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6wwn5\" (UniqueName: \"kubernetes.io/projected/9c8030e8-0df0-478c-83b8-2144a0402358-kube-api-access-6wwn5\") pod \"9c8030e8-0df0-478c-83b8-2144a0402358\" (UID: \"9c8030e8-0df0-478c-83b8-2144a0402358\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.719873 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c8030e8-0df0-478c-83b8-2144a0402358-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c8030e8-0df0-478c-83b8-2144a0402358" (UID: "9c8030e8-0df0-478c-83b8-2144a0402358"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.719936 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aaab981-61b9-43df-a72c-8543ad202980-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0aaab981-61b9-43df-a72c-8543ad202980" (UID: "0aaab981-61b9-43df-a72c-8543ad202980"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.720366 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c8030e8-0df0-478c-83b8-2144a0402358-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.720386 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0aaab981-61b9-43df-a72c-8543ad202980-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.720470 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70ee1b4f-d0ae-44c6-89fa-03712279a648-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70ee1b4f-d0ae-44c6-89fa-03712279a648" (UID: "70ee1b4f-d0ae-44c6-89fa-03712279a648"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.729506 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ee1b4f-d0ae-44c6-89fa-03712279a648-kube-api-access-f66mz" (OuterVolumeSpecName: "kube-api-access-f66mz") pod "70ee1b4f-d0ae-44c6-89fa-03712279a648" (UID: "70ee1b4f-d0ae-44c6-89fa-03712279a648"). InnerVolumeSpecName "kube-api-access-f66mz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.730580 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aaab981-61b9-43df-a72c-8543ad202980-kube-api-access-l4z9k" (OuterVolumeSpecName: "kube-api-access-l4z9k") pod "0aaab981-61b9-43df-a72c-8543ad202980" (UID: "0aaab981-61b9-43df-a72c-8543ad202980"). InnerVolumeSpecName "kube-api-access-l4z9k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.733248 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c8030e8-0df0-478c-83b8-2144a0402358-kube-api-access-6wwn5" (OuterVolumeSpecName: "kube-api-access-6wwn5") pod "9c8030e8-0df0-478c-83b8-2144a0402358" (UID: "9c8030e8-0df0-478c-83b8-2144a0402358"). InnerVolumeSpecName "kube-api-access-6wwn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.733672 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m2l86" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.738975 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-9txqv" event={"ID":"7efe4b83-3d8c-4e4a-a7e0-0bbccb160506","Type":"ContainerDied","Data":"98a2251d0e3abc1e27117f4b9821ab48281db05d5b4537c3a5485bf944a70638"} Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.739012 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98a2251d0e3abc1e27117f4b9821ab48281db05d5b4537c3a5485bf944a70638" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.741405 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9496-account-create-update-vvscs" event={"ID":"7b165ae4-b42a-4351-8bde-e88b7fa65137","Type":"ContainerDied","Data":"5b27194757337d46d3781d00455c67bcb1da1bb798692a87d34d70aa627a0d2d"} Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.741424 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b27194757337d46d3781d00455c67bcb1da1bb798692a87d34d70aa627a0d2d" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.741480 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9496-account-create-update-vvscs" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.744422 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8204-account-create-update-96fmd" event={"ID":"70ee1b4f-d0ae-44c6-89fa-03712279a648","Type":"ContainerDied","Data":"dbd44dc60c157d9002d3a52463ad3afd43ed233d7bb09df556275ce58bcde3c0"} Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.744439 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbd44dc60c157d9002d3a52463ad3afd43ed233d7bb09df556275ce58bcde3c0" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.744477 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8204-account-create-update-96fmd" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.756327 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m2l86" event={"ID":"387afd6f-c438-44b0-ba75-db7c0ecd911b","Type":"ContainerDied","Data":"91d9cfbf763de508bd294a9ad13d43890ff755b36aa1d0a6e38112d10124f647"} Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.756375 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91d9cfbf763de508bd294a9ad13d43890ff755b36aa1d0a6e38112d10124f647" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.756466 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-m2l86" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.759188 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ktsg2-config-wn2rt" event={"ID":"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c","Type":"ContainerDied","Data":"a88fe9b928cef3ff9ccb48dd2e2ac950d06cba00071c9a5c00bf2b2f56598fe9"} Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.759211 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a88fe9b928cef3ff9ccb48dd2e2ac950d06cba00071c9a5c00bf2b2f56598fe9" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.760436 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-93e7-account-create-update-54hdz" event={"ID":"0aaab981-61b9-43df-a72c-8543ad202980","Type":"ContainerDied","Data":"db99c329aef9c80d04bd1ca962186ab576a85532a42ce0f81df55221d705a5fb"} Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.760457 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db99c329aef9c80d04bd1ca962186ab576a85532a42ce0f81df55221d705a5fb" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.760512 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-93e7-account-create-update-54hdz" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.763080 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dfjd4" event={"ID":"9c8030e8-0df0-478c-83b8-2144a0402358","Type":"ContainerDied","Data":"b1c07fb4bb183fca81d22b308f41f2deb27f72ff013dfca6a1264031b057e236"} Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.763123 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1c07fb4bb183fca81d22b308f41f2deb27f72ff013dfca6a1264031b057e236" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.763201 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dfjd4" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.821712 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b165ae4-b42a-4351-8bde-e88b7fa65137-operator-scripts\") pod \"7b165ae4-b42a-4351-8bde-e88b7fa65137\" (UID: \"7b165ae4-b42a-4351-8bde-e88b7fa65137\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.821808 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wf4p\" (UniqueName: \"kubernetes.io/projected/7b165ae4-b42a-4351-8bde-e88b7fa65137-kube-api-access-4wf4p\") pod \"7b165ae4-b42a-4351-8bde-e88b7fa65137\" (UID: \"7b165ae4-b42a-4351-8bde-e88b7fa65137\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.822089 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdz2x\" (UniqueName: \"kubernetes.io/projected/387afd6f-c438-44b0-ba75-db7c0ecd911b-kube-api-access-rdz2x\") pod \"387afd6f-c438-44b0-ba75-db7c0ecd911b\" (UID: \"387afd6f-c438-44b0-ba75-db7c0ecd911b\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.824275 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b165ae4-b42a-4351-8bde-e88b7fa65137-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b165ae4-b42a-4351-8bde-e88b7fa65137" (UID: "7b165ae4-b42a-4351-8bde-e88b7fa65137"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.824580 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b165ae4-b42a-4351-8bde-e88b7fa65137-kube-api-access-4wf4p" (OuterVolumeSpecName: "kube-api-access-4wf4p") pod "7b165ae4-b42a-4351-8bde-e88b7fa65137" (UID: "7b165ae4-b42a-4351-8bde-e88b7fa65137"). 
InnerVolumeSpecName "kube-api-access-4wf4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.825258 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/387afd6f-c438-44b0-ba75-db7c0ecd911b-operator-scripts\") pod \"387afd6f-c438-44b0-ba75-db7c0ecd911b\" (UID: \"387afd6f-c438-44b0-ba75-db7c0ecd911b\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.826156 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wwn5\" (UniqueName: \"kubernetes.io/projected/9c8030e8-0df0-478c-83b8-2144a0402358-kube-api-access-6wwn5\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.826177 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4z9k\" (UniqueName: \"kubernetes.io/projected/0aaab981-61b9-43df-a72c-8543ad202980-kube-api-access-l4z9k\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.826191 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f66mz\" (UniqueName: \"kubernetes.io/projected/70ee1b4f-d0ae-44c6-89fa-03712279a648-kube-api-access-f66mz\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.826204 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b165ae4-b42a-4351-8bde-e88b7fa65137-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.826217 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70ee1b4f-d0ae-44c6-89fa-03712279a648-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.826313 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/387afd6f-c438-44b0-ba75-db7c0ecd911b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "387afd6f-c438-44b0-ba75-db7c0ecd911b" (UID: "387afd6f-c438-44b0-ba75-db7c0ecd911b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.826387 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/387afd6f-c438-44b0-ba75-db7c0ecd911b-kube-api-access-rdz2x" (OuterVolumeSpecName: "kube-api-access-rdz2x") pod "387afd6f-c438-44b0-ba75-db7c0ecd911b" (UID: "387afd6f-c438-44b0-ba75-db7c0ecd911b"). InnerVolumeSpecName "kube-api-access-rdz2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.829015 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wf4p\" (UniqueName: \"kubernetes.io/projected/7b165ae4-b42a-4351-8bde-e88b7fa65137-kube-api-access-4wf4p\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.861009 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-9txqv" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.863988 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.930731 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-run\") pod \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.930873 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-run" (OuterVolumeSpecName: "var-run") pod "b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c" (UID: "b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.930896 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp8dg\" (UniqueName: \"kubernetes.io/projected/7efe4b83-3d8c-4e4a-a7e0-0bbccb160506-kube-api-access-hp8dg\") pod \"7efe4b83-3d8c-4e4a-a7e0-0bbccb160506\" (UID: \"7efe4b83-3d8c-4e4a-a7e0-0bbccb160506\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.931050 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7efe4b83-3d8c-4e4a-a7e0-0bbccb160506-operator-scripts\") pod \"7efe4b83-3d8c-4e4a-a7e0-0bbccb160506\" (UID: \"7efe4b83-3d8c-4e4a-a7e0-0bbccb160506\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.931094 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgj9n\" (UniqueName: \"kubernetes.io/projected/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-kube-api-access-lgj9n\") pod \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.931142 4870 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-scripts\") pod \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.931169 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-run-ovn\") pod \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.931188 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-log-ovn\") pod \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.931219 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-additional-scripts\") pod \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\" (UID: \"b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c\") " Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.931740 4870 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.931756 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdz2x\" (UniqueName: \"kubernetes.io/projected/387afd6f-c438-44b0-ba75-db7c0ecd911b-kube-api-access-rdz2x\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.931769 4870 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/387afd6f-c438-44b0-ba75-db7c0ecd911b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.931845 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c" (UID: "b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.931886 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c" (UID: "b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.932107 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7efe4b83-3d8c-4e4a-a7e0-0bbccb160506-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7efe4b83-3d8c-4e4a-a7e0-0bbccb160506" (UID: "7efe4b83-3d8c-4e4a-a7e0-0bbccb160506"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.932717 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c" (UID: "b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.932754 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-scripts" (OuterVolumeSpecName: "scripts") pod "b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c" (UID: "b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.935926 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7efe4b83-3d8c-4e4a-a7e0-0bbccb160506-kube-api-access-hp8dg" (OuterVolumeSpecName: "kube-api-access-hp8dg") pod "7efe4b83-3d8c-4e4a-a7e0-0bbccb160506" (UID: "7efe4b83-3d8c-4e4a-a7e0-0bbccb160506"). InnerVolumeSpecName "kube-api-access-hp8dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:19.936171 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-kube-api-access-lgj9n" (OuterVolumeSpecName: "kube-api-access-lgj9n") pod "b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c" (UID: "b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c"). InnerVolumeSpecName "kube-api-access-lgj9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:20.033658 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7efe4b83-3d8c-4e4a-a7e0-0bbccb160506-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:20.033697 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgj9n\" (UniqueName: \"kubernetes.io/projected/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-kube-api-access-lgj9n\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:20.033715 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:20.033730 4870 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:20.033744 4870 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:20.033766 4870 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:20.033778 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp8dg\" (UniqueName: \"kubernetes.io/projected/7efe4b83-3d8c-4e4a-a7e0-0bbccb160506-kube-api-access-hp8dg\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 
17:20:20.781288 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x74v2" event={"ID":"248e9264-b0ec-412b-aa16-0c3869d5f245","Type":"ContainerStarted","Data":"1bd04f1d0f661e8bd9a2941989e2c6b92edd63e39b11c93d58af6cb1e82330ad"} Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:20.832149 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ktsg2-config-wn2rt" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:20.833821 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"669a24d2-3e17-4ce1-aba2-c45d2a92683a","Type":"ContainerStarted","Data":"d6a2a2de1d87a249e5c0e734ddcf6c04e557067ad0f8b2c4544817acb96b21fe"} Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:20.833925 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-9txqv" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:20.858059 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-x74v2" podStartSLOduration=8.307106258 podStartE2EDuration="13.85804005s" podCreationTimestamp="2026-02-16 17:20:07 +0000 UTC" firstStartedPulling="2026-02-16 17:20:14.246792857 +0000 UTC m=+1218.730257241" lastFinishedPulling="2026-02-16 17:20:19.797726649 +0000 UTC m=+1224.281191033" observedRunningTime="2026-02-16 17:20:20.816761969 +0000 UTC m=+1225.300226353" watchObservedRunningTime="2026-02-16 17:20:20.85804005 +0000 UTC m=+1225.341504434" Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:20.982755 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ktsg2-config-wn2rt"] Feb 16 17:20:20 crc kubenswrapper[4870]: I0216 17:20:20.992076 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ktsg2-config-wn2rt"] Feb 16 17:20:21 crc kubenswrapper[4870]: I0216 17:20:21.846234 4870 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"669a24d2-3e17-4ce1-aba2-c45d2a92683a","Type":"ContainerStarted","Data":"8e29f1cc1d988d3f5c7484f098df5c15a5d085c70309e594039522ae5aca9647"} Feb 16 17:20:21 crc kubenswrapper[4870]: I0216 17:20:21.846586 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"669a24d2-3e17-4ce1-aba2-c45d2a92683a","Type":"ContainerStarted","Data":"2d6da2a05c443a0bfe739256742986f73efc142fd52bb255caf680b07714243f"} Feb 16 17:20:21 crc kubenswrapper[4870]: I0216 17:20:21.846606 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"669a24d2-3e17-4ce1-aba2-c45d2a92683a","Type":"ContainerStarted","Data":"8b8e0bc8a02612d5ae1f625ed40b49a8f950f5ca14260adcb28691061e172aaf"} Feb 16 17:20:22 crc kubenswrapper[4870]: I0216 17:20:22.237215 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c" path="/var/lib/kubelet/pods/b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c/volumes" Feb 16 17:20:22 crc kubenswrapper[4870]: I0216 17:20:22.869145 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"669a24d2-3e17-4ce1-aba2-c45d2a92683a","Type":"ContainerStarted","Data":"0ada559419d49eb8b6d71484e6811cccddf87e9c4d22013d6f31ac250dd9ca9d"} Feb 16 17:20:23 crc kubenswrapper[4870]: I0216 17:20:23.889963 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"669a24d2-3e17-4ce1-aba2-c45d2a92683a","Type":"ContainerStarted","Data":"ae8be3026d4237d86a3018ce4a123c6e469e874c49d29f100e9d6be964050fc7"} Feb 16 17:20:23 crc kubenswrapper[4870]: I0216 17:20:23.890899 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"669a24d2-3e17-4ce1-aba2-c45d2a92683a","Type":"ContainerStarted","Data":"ad77916319c494183059fa007799dda6e3ff870c14e78fa92437411515c34e0a"} Feb 16 17:20:23 crc 
kubenswrapper[4870]: I0216 17:20:23.890918 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"669a24d2-3e17-4ce1-aba2-c45d2a92683a","Type":"ContainerStarted","Data":"8d12706e5938dba2eefa54b5e0d69a76e0ea8566d8c352796635b9327ea07065"} Feb 16 17:20:23 crc kubenswrapper[4870]: I0216 17:20:23.894274 4870 generic.go:334] "Generic (PLEG): container finished" podID="39dde5ae-2522-43c8-a0e0-9e257052bab6" containerID="ec58dcc7df0afc5cd363aa3aacb089656eb0ef2a6ec1a4230728bbc6888d5af9" exitCode=0 Feb 16 17:20:23 crc kubenswrapper[4870]: I0216 17:20:23.894373 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39dde5ae-2522-43c8-a0e0-9e257052bab6","Type":"ContainerDied","Data":"ec58dcc7df0afc5cd363aa3aacb089656eb0ef2a6ec1a4230728bbc6888d5af9"} Feb 16 17:20:23 crc kubenswrapper[4870]: I0216 17:20:23.896842 4870 generic.go:334] "Generic (PLEG): container finished" podID="248e9264-b0ec-412b-aa16-0c3869d5f245" containerID="1bd04f1d0f661e8bd9a2941989e2c6b92edd63e39b11c93d58af6cb1e82330ad" exitCode=0 Feb 16 17:20:23 crc kubenswrapper[4870]: I0216 17:20:23.896897 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x74v2" event={"ID":"248e9264-b0ec-412b-aa16-0c3869d5f245","Type":"ContainerDied","Data":"1bd04f1d0f661e8bd9a2941989e2c6b92edd63e39b11c93d58af6cb1e82330ad"} Feb 16 17:20:24 crc kubenswrapper[4870]: I0216 17:20:24.097610 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 17:20:24 crc kubenswrapper[4870]: I0216 17:20:24.911098 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"669a24d2-3e17-4ce1-aba2-c45d2a92683a","Type":"ContainerStarted","Data":"feb8684f389acd584d25ba64863dc0b914ba3fdad66a66e60b0ed1e517a24659"} Feb 16 17:20:24 crc kubenswrapper[4870]: I0216 17:20:24.911504 4870 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"669a24d2-3e17-4ce1-aba2-c45d2a92683a","Type":"ContainerStarted","Data":"76630133b01f73d1f4fe4789460fac798449248622780eaab286299c4540f574"} Feb 16 17:20:24 crc kubenswrapper[4870]: I0216 17:20:24.911525 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"669a24d2-3e17-4ce1-aba2-c45d2a92683a","Type":"ContainerStarted","Data":"b83e422d31579f7fe72ae97e16f1a4212ebefc714122bd527aadfcdc4992dbe3"} Feb 16 17:20:24 crc kubenswrapper[4870]: I0216 17:20:24.913282 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39dde5ae-2522-43c8-a0e0-9e257052bab6","Type":"ContainerStarted","Data":"99f604a2d9fd545d58c8f19c889a6837fd553436caf78592625e8cbc74abb18b"} Feb 16 17:20:24 crc kubenswrapper[4870]: I0216 17:20:24.959837 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=44.702076125 podStartE2EDuration="52.959817832s" podCreationTimestamp="2026-02-16 17:19:32 +0000 UTC" firstStartedPulling="2026-02-16 17:20:14.352552093 +0000 UTC m=+1218.836016467" lastFinishedPulling="2026-02-16 17:20:22.61029379 +0000 UTC m=+1227.093758174" observedRunningTime="2026-02-16 17:20:24.950761597 +0000 UTC m=+1229.434225971" watchObservedRunningTime="2026-02-16 17:20:24.959817832 +0000 UTC m=+1229.443282206" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.269429 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x74v2" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.296163 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jmmxs"] Feb 16 17:20:25 crc kubenswrapper[4870]: E0216 17:20:25.296605 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="387afd6f-c438-44b0-ba75-db7c0ecd911b" containerName="mariadb-database-create" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.296627 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="387afd6f-c438-44b0-ba75-db7c0ecd911b" containerName="mariadb-database-create" Feb 16 17:20:25 crc kubenswrapper[4870]: E0216 17:20:25.296646 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efe4b83-3d8c-4e4a-a7e0-0bbccb160506" containerName="mariadb-database-create" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.296653 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efe4b83-3d8c-4e4a-a7e0-0bbccb160506" containerName="mariadb-database-create" Feb 16 17:20:25 crc kubenswrapper[4870]: E0216 17:20:25.296667 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ee1b4f-d0ae-44c6-89fa-03712279a648" containerName="mariadb-account-create-update" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.296675 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ee1b4f-d0ae-44c6-89fa-03712279a648" containerName="mariadb-account-create-update" Feb 16 17:20:25 crc kubenswrapper[4870]: E0216 17:20:25.296695 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="248e9264-b0ec-412b-aa16-0c3869d5f245" containerName="keystone-db-sync" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.296702 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="248e9264-b0ec-412b-aa16-0c3869d5f245" containerName="keystone-db-sync" Feb 16 17:20:25 crc kubenswrapper[4870]: E0216 17:20:25.296724 4870 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a05594c3-a7c9-4326-83d3-8602b8077b29" containerName="mariadb-account-create-update" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.296732 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a05594c3-a7c9-4326-83d3-8602b8077b29" containerName="mariadb-account-create-update" Feb 16 17:20:25 crc kubenswrapper[4870]: E0216 17:20:25.296748 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5551a3-6fd8-479b-ad29-488078cc5ad1" containerName="mariadb-database-create" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.296757 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5551a3-6fd8-479b-ad29-488078cc5ad1" containerName="mariadb-database-create" Feb 16 17:20:25 crc kubenswrapper[4870]: E0216 17:20:25.296772 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aaab981-61b9-43df-a72c-8543ad202980" containerName="mariadb-account-create-update" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.296780 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aaab981-61b9-43df-a72c-8543ad202980" containerName="mariadb-account-create-update" Feb 16 17:20:25 crc kubenswrapper[4870]: E0216 17:20:25.296794 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c" containerName="ovn-config" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.296801 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c" containerName="ovn-config" Feb 16 17:20:25 crc kubenswrapper[4870]: E0216 17:20:25.296815 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b165ae4-b42a-4351-8bde-e88b7fa65137" containerName="mariadb-account-create-update" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.296822 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b165ae4-b42a-4351-8bde-e88b7fa65137" containerName="mariadb-account-create-update" Feb 16 17:20:25 crc kubenswrapper[4870]: 
E0216 17:20:25.296831 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c8030e8-0df0-478c-83b8-2144a0402358" containerName="mariadb-database-create" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.296837 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c8030e8-0df0-478c-83b8-2144a0402358" containerName="mariadb-database-create" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.297062 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c8030e8-0df0-478c-83b8-2144a0402358" containerName="mariadb-database-create" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.297080 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aaab981-61b9-43df-a72c-8543ad202980" containerName="mariadb-account-create-update" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.297096 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b165ae4-b42a-4351-8bde-e88b7fa65137" containerName="mariadb-account-create-update" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.297110 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a5551a3-6fd8-479b-ad29-488078cc5ad1" containerName="mariadb-database-create" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.297126 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efe4b83-3d8c-4e4a-a7e0-0bbccb160506" containerName="mariadb-database-create" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.297140 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="a05594c3-a7c9-4326-83d3-8602b8077b29" containerName="mariadb-account-create-update" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.297150 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="248e9264-b0ec-412b-aa16-0c3869d5f245" containerName="keystone-db-sync" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.297161 4870 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="70ee1b4f-d0ae-44c6-89fa-03712279a648" containerName="mariadb-account-create-update" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.297176 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="387afd6f-c438-44b0-ba75-db7c0ecd911b" containerName="mariadb-database-create" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.297191 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="b867ab55-eb6f-4a2e-9ed6-a4fbfb7ba93c" containerName="ovn-config" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.298397 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.306350 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.308551 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jmmxs"] Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.438027 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/248e9264-b0ec-412b-aa16-0c3869d5f245-config-data\") pod \"248e9264-b0ec-412b-aa16-0c3869d5f245\" (UID: \"248e9264-b0ec-412b-aa16-0c3869d5f245\") " Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.438128 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/248e9264-b0ec-412b-aa16-0c3869d5f245-combined-ca-bundle\") pod \"248e9264-b0ec-412b-aa16-0c3869d5f245\" (UID: \"248e9264-b0ec-412b-aa16-0c3869d5f245\") " Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.438172 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g6ks\" (UniqueName: \"kubernetes.io/projected/248e9264-b0ec-412b-aa16-0c3869d5f245-kube-api-access-4g6ks\") pod 
\"248e9264-b0ec-412b-aa16-0c3869d5f245\" (UID: \"248e9264-b0ec-412b-aa16-0c3869d5f245\") " Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.438931 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-config\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.439036 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff4sw\" (UniqueName: \"kubernetes.io/projected/709bdb2a-907c-41ab-bafc-08979f79771e-kube-api-access-ff4sw\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.439108 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.439135 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.439357 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.439392 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-dns-svc\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.442764 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/248e9264-b0ec-412b-aa16-0c3869d5f245-kube-api-access-4g6ks" (OuterVolumeSpecName: "kube-api-access-4g6ks") pod "248e9264-b0ec-412b-aa16-0c3869d5f245" (UID: "248e9264-b0ec-412b-aa16-0c3869d5f245"). InnerVolumeSpecName "kube-api-access-4g6ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.462150 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/248e9264-b0ec-412b-aa16-0c3869d5f245-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "248e9264-b0ec-412b-aa16-0c3869d5f245" (UID: "248e9264-b0ec-412b-aa16-0c3869d5f245"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.497997 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/248e9264-b0ec-412b-aa16-0c3869d5f245-config-data" (OuterVolumeSpecName: "config-data") pod "248e9264-b0ec-412b-aa16-0c3869d5f245" (UID: "248e9264-b0ec-412b-aa16-0c3869d5f245"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.541107 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.541150 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.541256 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.541281 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-dns-svc\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.541303 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-config\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc 
kubenswrapper[4870]: I0216 17:20:25.541335 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff4sw\" (UniqueName: \"kubernetes.io/projected/709bdb2a-907c-41ab-bafc-08979f79771e-kube-api-access-ff4sw\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.541384 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/248e9264-b0ec-412b-aa16-0c3869d5f245-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.541394 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/248e9264-b0ec-412b-aa16-0c3869d5f245-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.541406 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g6ks\" (UniqueName: \"kubernetes.io/projected/248e9264-b0ec-412b-aa16-0c3869d5f245-kube-api-access-4g6ks\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.542204 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.542301 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-dns-svc\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 
17:20:25.542523 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.542819 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-config\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.543426 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.565113 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff4sw\" (UniqueName: \"kubernetes.io/projected/709bdb2a-907c-41ab-bafc-08979f79771e-kube-api-access-ff4sw\") pod \"dnsmasq-dns-764c5664d7-jmmxs\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.616759 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.923504 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x74v2" event={"ID":"248e9264-b0ec-412b-aa16-0c3869d5f245","Type":"ContainerDied","Data":"2a247dff549e4e0ed2ab02b276e268a29dee7d37e179f7dddaeb015a0b0edb8e"} Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.923725 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a247dff549e4e0ed2ab02b276e268a29dee7d37e179f7dddaeb015a0b0edb8e" Feb 16 17:20:25 crc kubenswrapper[4870]: I0216 17:20:25.923527 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x74v2" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.095990 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jmmxs"] Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.283774 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jmmxs"] Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.307908 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-shbhr"] Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.309457 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.314393 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.314581 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pm4j6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.314738 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.314839 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.315005 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.363518 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-shbhr"] Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.389864 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-frpm6"] Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.409412 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.453023 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-frpm6"] Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.467453 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-credential-keys\") pod \"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.467552 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-config-data\") pod \"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.467596 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-scripts\") pod \"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.467621 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-combined-ca-bundle\") pod \"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.467686 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-87h74\" (UniqueName: \"kubernetes.io/projected/84df055c-e479-445e-843b-eb84b43e3f7d-kube-api-access-87h74\") pod \"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.467779 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-fernet-keys\") pod \"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.573548 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-config\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.573968 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87h74\" (UniqueName: \"kubernetes.io/projected/84df055c-e479-445e-843b-eb84b43e3f7d-kube-api-access-87h74\") pod \"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.574034 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.574184 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-fernet-keys\") pod \"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.574278 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.574338 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.574369 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-credential-keys\") pod \"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.574462 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-config-data\") pod \"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.574491 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-dns-svc\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.574544 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-scripts\") pod \"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.574572 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-combined-ca-bundle\") pod \"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.574620 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2rz6\" (UniqueName: \"kubernetes.io/projected/eae2f115-138b-415a-b173-6205d02ab9af-kube-api-access-r2rz6\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.579869 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-combined-ca-bundle\") pod \"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.587980 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-config-data\") pod 
\"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.588576 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-fernet-keys\") pod \"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.601417 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-scripts\") pod \"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.604494 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-credential-keys\") pod \"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.623030 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-4mwgd"] Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.624605 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.649979 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ggqxr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.650205 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.650345 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.664353 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vmbrl"] Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.665600 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vmbrl" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.669221 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87h74\" (UniqueName: \"kubernetes.io/projected/84df055c-e479-445e-843b-eb84b43e3f7d-kube-api-access-87h74\") pod \"keystone-bootstrap-shbhr\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.688699 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.688763 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: 
\"eae2f115-138b-415a-b173-6205d02ab9af\") " pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.688824 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-dns-svc\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.688878 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2rz6\" (UniqueName: \"kubernetes.io/projected/eae2f115-138b-415a-b173-6205d02ab9af-kube-api-access-r2rz6\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.688914 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-config\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.688973 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.690672 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " 
pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.692145 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-dns-svc\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.695068 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-config\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.695198 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.699821 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.725566 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.725758 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qr49x" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.725862 4870 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"neutron-config" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.728361 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.730689 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.737751 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.741116 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.748478 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-6hkdm"] Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.749740 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-6hkdm" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.772433 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.772602 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-2nm29" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.772637 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.773145 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.776858 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4mwgd"] Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.788021 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-db-sync-vmbrl"] Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.790475 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9719dd82-cec9-4a56-ae93-29ccca75a3ef-etc-machine-id\") pod \"cinder-db-sync-4mwgd\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.790520 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-config-data\") pod \"cinder-db-sync-4mwgd\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.790562 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-db-sync-config-data\") pod \"cinder-db-sync-4mwgd\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.790592 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh42k\" (UniqueName: \"kubernetes.io/projected/913c8c11-d196-4f95-9aba-a4552bcbef88-kube-api-access-vh42k\") pod \"neutron-db-sync-vmbrl\" (UID: \"913c8c11-d196-4f95-9aba-a4552bcbef88\") " pod="openstack/neutron-db-sync-vmbrl" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.790608 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-combined-ca-bundle\") pod \"cinder-db-sync-4mwgd\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " 
pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.790644 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhj74\" (UniqueName: \"kubernetes.io/projected/9719dd82-cec9-4a56-ae93-29ccca75a3ef-kube-api-access-fhj74\") pod \"cinder-db-sync-4mwgd\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.790662 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913c8c11-d196-4f95-9aba-a4552bcbef88-combined-ca-bundle\") pod \"neutron-db-sync-vmbrl\" (UID: \"913c8c11-d196-4f95-9aba-a4552bcbef88\") " pod="openstack/neutron-db-sync-vmbrl" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.790726 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/913c8c11-d196-4f95-9aba-a4552bcbef88-config\") pod \"neutron-db-sync-vmbrl\" (UID: \"913c8c11-d196-4f95-9aba-a4552bcbef88\") " pod="openstack/neutron-db-sync-vmbrl" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.790747 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-scripts\") pod \"cinder-db-sync-4mwgd\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.800262 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.824026 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-6hkdm"] Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.873700 4870 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892303 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892360 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38836e81-1b99-4b50-ada2-40727db1f248-log-httpd\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892396 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-config-data\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892438 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-db-sync-config-data\") pod \"cinder-db-sync-4mwgd\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892479 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh42k\" (UniqueName: \"kubernetes.io/projected/913c8c11-d196-4f95-9aba-a4552bcbef88-kube-api-access-vh42k\") pod \"neutron-db-sync-vmbrl\" (UID: \"913c8c11-d196-4f95-9aba-a4552bcbef88\") " pod="openstack/neutron-db-sync-vmbrl" Feb 16 17:20:26 crc 
kubenswrapper[4870]: I0216 17:20:26.892503 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-combined-ca-bundle\") pod \"cinder-db-sync-4mwgd\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892527 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-scripts\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892547 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38836e81-1b99-4b50-ada2-40727db1f248-run-httpd\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892588 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a86750-1fff-4add-8462-7ab805ec7f89-config-data\") pod \"cloudkitty-db-sync-6hkdm\" (UID: \"34a86750-1fff-4add-8462-7ab805ec7f89\") " pod="openstack/cloudkitty-db-sync-6hkdm" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892617 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhj74\" (UniqueName: \"kubernetes.io/projected/9719dd82-cec9-4a56-ae93-29ccca75a3ef-kube-api-access-fhj74\") pod \"cinder-db-sync-4mwgd\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892640 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a86750-1fff-4add-8462-7ab805ec7f89-scripts\") pod \"cloudkitty-db-sync-6hkdm\" (UID: \"34a86750-1fff-4add-8462-7ab805ec7f89\") " pod="openstack/cloudkitty-db-sync-6hkdm" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892666 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913c8c11-d196-4f95-9aba-a4552bcbef88-combined-ca-bundle\") pod \"neutron-db-sync-vmbrl\" (UID: \"913c8c11-d196-4f95-9aba-a4552bcbef88\") " pod="openstack/neutron-db-sync-vmbrl" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892737 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/913c8c11-d196-4f95-9aba-a4552bcbef88-config\") pod \"neutron-db-sync-vmbrl\" (UID: \"913c8c11-d196-4f95-9aba-a4552bcbef88\") " pod="openstack/neutron-db-sync-vmbrl" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892758 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-scripts\") pod \"cinder-db-sync-4mwgd\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892795 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b2p7\" (UniqueName: \"kubernetes.io/projected/34a86750-1fff-4add-8462-7ab805ec7f89-kube-api-access-7b2p7\") pod \"cloudkitty-db-sync-6hkdm\" (UID: \"34a86750-1fff-4add-8462-7ab805ec7f89\") " pod="openstack/cloudkitty-db-sync-6hkdm" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892824 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892865 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a86750-1fff-4add-8462-7ab805ec7f89-combined-ca-bundle\") pod \"cloudkitty-db-sync-6hkdm\" (UID: \"34a86750-1fff-4add-8462-7ab805ec7f89\") " pod="openstack/cloudkitty-db-sync-6hkdm" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892896 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnjmz\" (UniqueName: \"kubernetes.io/projected/38836e81-1b99-4b50-ada2-40727db1f248-kube-api-access-nnjmz\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892927 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9719dd82-cec9-4a56-ae93-29ccca75a3ef-etc-machine-id\") pod \"cinder-db-sync-4mwgd\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.892978 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-config-data\") pod \"cinder-db-sync-4mwgd\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.893010 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/34a86750-1fff-4add-8462-7ab805ec7f89-certs\") pod 
\"cloudkitty-db-sync-6hkdm\" (UID: \"34a86750-1fff-4add-8462-7ab805ec7f89\") " pod="openstack/cloudkitty-db-sync-6hkdm" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.906089 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9719dd82-cec9-4a56-ae93-29ccca75a3ef-etc-machine-id\") pod \"cinder-db-sync-4mwgd\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.932158 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-frpm6"] Feb 16 17:20:26 crc kubenswrapper[4870]: E0216 17:20:26.932857 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-r2rz6], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5959f8865f-frpm6" podUID="eae2f115-138b-415a-b173-6205d02ab9af" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.955461 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" event={"ID":"709bdb2a-907c-41ab-bafc-08979f79771e","Type":"ContainerStarted","Data":"cdbe8d1cb18691a2c1e3e3394a9d3cdb56ffeadc557c2f9f49a1278285950fff"} Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.990307 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-s4xns"] Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.991600 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-s4xns" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.995818 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/34a86750-1fff-4add-8462-7ab805ec7f89-certs\") pod \"cloudkitty-db-sync-6hkdm\" (UID: \"34a86750-1fff-4add-8462-7ab805ec7f89\") " pod="openstack/cloudkitty-db-sync-6hkdm" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.995864 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.995884 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38836e81-1b99-4b50-ada2-40727db1f248-log-httpd\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.995903 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-config-data\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.995974 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-scripts\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.995989 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/38836e81-1b99-4b50-ada2-40727db1f248-run-httpd\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.996014 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a86750-1fff-4add-8462-7ab805ec7f89-config-data\") pod \"cloudkitty-db-sync-6hkdm\" (UID: \"34a86750-1fff-4add-8462-7ab805ec7f89\") " pod="openstack/cloudkitty-db-sync-6hkdm" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.996037 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a86750-1fff-4add-8462-7ab805ec7f89-scripts\") pod \"cloudkitty-db-sync-6hkdm\" (UID: \"34a86750-1fff-4add-8462-7ab805ec7f89\") " pod="openstack/cloudkitty-db-sync-6hkdm" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.996117 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b2p7\" (UniqueName: \"kubernetes.io/projected/34a86750-1fff-4add-8462-7ab805ec7f89-kube-api-access-7b2p7\") pod \"cloudkitty-db-sync-6hkdm\" (UID: \"34a86750-1fff-4add-8462-7ab805ec7f89\") " pod="openstack/cloudkitty-db-sync-6hkdm" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.996139 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.996173 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a86750-1fff-4add-8462-7ab805ec7f89-combined-ca-bundle\") pod \"cloudkitty-db-sync-6hkdm\" (UID: 
\"34a86750-1fff-4add-8462-7ab805ec7f89\") " pod="openstack/cloudkitty-db-sync-6hkdm" Feb 16 17:20:26 crc kubenswrapper[4870]: I0216 17:20:26.996194 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnjmz\" (UniqueName: \"kubernetes.io/projected/38836e81-1b99-4b50-ada2-40727db1f248-kube-api-access-nnjmz\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:26.997466 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38836e81-1b99-4b50-ada2-40727db1f248-run-httpd\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.015479 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-s4xns"] Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.015867 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38836e81-1b99-4b50-ada2-40727db1f248-log-httpd\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.024538 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-h5smh" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.025138 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.076696 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zgtq8"] Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.078699 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.111477 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-db-sync-config-data\") pod \"barbican-db-sync-s4xns\" (UID: \"375ecf8f-1d93-40fb-85dc-c0eabcef46c3\") " pod="openstack/barbican-db-sync-s4xns" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.111620 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpznb\" (UniqueName: \"kubernetes.io/projected/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-kube-api-access-zpznb\") pod \"barbican-db-sync-s4xns\" (UID: \"375ecf8f-1d93-40fb-85dc-c0eabcef46c3\") " pod="openstack/barbican-db-sync-s4xns" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.111739 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-combined-ca-bundle\") pod \"barbican-db-sync-s4xns\" (UID: \"375ecf8f-1d93-40fb-85dc-c0eabcef46c3\") " pod="openstack/barbican-db-sync-s4xns" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.126155 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2rz6\" (UniqueName: \"kubernetes.io/projected/eae2f115-138b-415a-b173-6205d02ab9af-kube-api-access-r2rz6\") pod \"dnsmasq-dns-5959f8865f-frpm6\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.128822 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-combined-ca-bundle\") pod \"cinder-db-sync-4mwgd\" (UID: 
\"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.132057 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/913c8c11-d196-4f95-9aba-a4552bcbef88-config\") pod \"neutron-db-sync-vmbrl\" (UID: \"913c8c11-d196-4f95-9aba-a4552bcbef88\") " pod="openstack/neutron-db-sync-vmbrl" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.148747 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-db-sync-config-data\") pod \"cinder-db-sync-4mwgd\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.149368 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-scripts\") pod \"cinder-db-sync-4mwgd\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.149548 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-config-data\") pod \"cinder-db-sync-4mwgd\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.153389 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhj74\" (UniqueName: \"kubernetes.io/projected/9719dd82-cec9-4a56-ae93-29ccca75a3ef-kube-api-access-fhj74\") pod \"cinder-db-sync-4mwgd\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.152937 4870 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vh42k\" (UniqueName: \"kubernetes.io/projected/913c8c11-d196-4f95-9aba-a4552bcbef88-kube-api-access-vh42k\") pod \"neutron-db-sync-vmbrl\" (UID: \"913c8c11-d196-4f95-9aba-a4552bcbef88\") " pod="openstack/neutron-db-sync-vmbrl" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.154494 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913c8c11-d196-4f95-9aba-a4552bcbef88-combined-ca-bundle\") pod \"neutron-db-sync-vmbrl\" (UID: \"913c8c11-d196-4f95-9aba-a4552bcbef88\") " pod="openstack/neutron-db-sync-vmbrl" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.160755 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/34a86750-1fff-4add-8462-7ab805ec7f89-certs\") pod \"cloudkitty-db-sync-6hkdm\" (UID: \"34a86750-1fff-4add-8462-7ab805ec7f89\") " pod="openstack/cloudkitty-db-sync-6hkdm" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.163529 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-config-data\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.164874 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.173228 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a86750-1fff-4add-8462-7ab805ec7f89-scripts\") pod \"cloudkitty-db-sync-6hkdm\" (UID: \"34a86750-1fff-4add-8462-7ab805ec7f89\") 
" pod="openstack/cloudkitty-db-sync-6hkdm" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.174031 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.174090 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a86750-1fff-4add-8462-7ab805ec7f89-config-data\") pod \"cloudkitty-db-sync-6hkdm\" (UID: \"34a86750-1fff-4add-8462-7ab805ec7f89\") " pod="openstack/cloudkitty-db-sync-6hkdm" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.175365 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a86750-1fff-4add-8462-7ab805ec7f89-combined-ca-bundle\") pod \"cloudkitty-db-sync-6hkdm\" (UID: \"34a86750-1fff-4add-8462-7ab805ec7f89\") " pod="openstack/cloudkitty-db-sync-6hkdm" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.181115 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-9n2tj"] Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.183887 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-9n2tj" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.185813 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-scripts\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.186813 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.187290 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.190150 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zgtq8"] Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.191886 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4vpb8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.197650 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b2p7\" (UniqueName: \"kubernetes.io/projected/34a86750-1fff-4add-8462-7ab805ec7f89-kube-api-access-7b2p7\") pod \"cloudkitty-db-sync-6hkdm\" (UID: \"34a86750-1fff-4add-8462-7ab805ec7f89\") " pod="openstack/cloudkitty-db-sync-6hkdm" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.202364 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnjmz\" (UniqueName: \"kubernetes.io/projected/38836e81-1b99-4b50-ada2-40727db1f248-kube-api-access-nnjmz\") pod \"ceilometer-0\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " pod="openstack/ceilometer-0" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.213208 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-db-sync-config-data\") pod \"barbican-db-sync-s4xns\" (UID: \"375ecf8f-1d93-40fb-85dc-c0eabcef46c3\") " pod="openstack/barbican-db-sync-s4xns" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.214470 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.214521 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpznb\" (UniqueName: \"kubernetes.io/projected/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-kube-api-access-zpznb\") pod \"barbican-db-sync-s4xns\" (UID: \"375ecf8f-1d93-40fb-85dc-c0eabcef46c3\") " pod="openstack/barbican-db-sync-s4xns" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.214559 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.214618 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-combined-ca-bundle\") pod \"barbican-db-sync-s4xns\" (UID: \"375ecf8f-1d93-40fb-85dc-c0eabcef46c3\") " pod="openstack/barbican-db-sync-s4xns" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.214637 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.214715 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.214788 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-config\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.214807 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnr24\" (UniqueName: \"kubernetes.io/projected/6bcafa21-00bc-4d37-9294-c3f378c43012-kube-api-access-tnr24\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.216788 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9n2tj"] Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.219543 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-combined-ca-bundle\") pod \"barbican-db-sync-s4xns\" (UID: \"375ecf8f-1d93-40fb-85dc-c0eabcef46c3\") " pod="openstack/barbican-db-sync-s4xns" Feb 16 17:20:27 crc 
kubenswrapper[4870]: I0216 17:20:27.224069 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-db-sync-config-data\") pod \"barbican-db-sync-s4xns\" (UID: \"375ecf8f-1d93-40fb-85dc-c0eabcef46c3\") " pod="openstack/barbican-db-sync-s4xns" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.234372 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpznb\" (UniqueName: \"kubernetes.io/projected/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-kube-api-access-zpznb\") pod \"barbican-db-sync-s4xns\" (UID: \"375ecf8f-1d93-40fb-85dc-c0eabcef46c3\") " pod="openstack/barbican-db-sync-s4xns" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.316921 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.317139 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-config\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.317209 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnr24\" (UniqueName: \"kubernetes.io/projected/6bcafa21-00bc-4d37-9294-c3f378c43012-kube-api-access-tnr24\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.317405 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.317450 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpgz9\" (UniqueName: \"kubernetes.io/projected/6c6489e4-d44c-4e7d-a451-620da210060e-kube-api-access-fpgz9\") pod \"placement-db-sync-9n2tj\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") " pod="openstack/placement-db-sync-9n2tj" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.317491 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.317507 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-config-data\") pod \"placement-db-sync-9n2tj\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") " pod="openstack/placement-db-sync-9n2tj" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.317546 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.317698 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-combined-ca-bundle\") pod \"placement-db-sync-9n2tj\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") " pod="openstack/placement-db-sync-9n2tj" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.317736 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-scripts\") pod \"placement-db-sync-9n2tj\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") " pod="openstack/placement-db-sync-9n2tj" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.317751 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c6489e4-d44c-4e7d-a451-620da210060e-logs\") pod \"placement-db-sync-9n2tj\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") " pod="openstack/placement-db-sync-9n2tj" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.318153 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.318368 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-config\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.321375 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.321996 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.322038 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.340716 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnr24\" (UniqueName: \"kubernetes.io/projected/6bcafa21-00bc-4d37-9294-c3f378c43012-kube-api-access-tnr24\") pod \"dnsmasq-dns-58dd9ff6bc-zgtq8\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.410600 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.420934 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpgz9\" (UniqueName: \"kubernetes.io/projected/6c6489e4-d44c-4e7d-a451-620da210060e-kube-api-access-fpgz9\") pod \"placement-db-sync-9n2tj\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") " pod="openstack/placement-db-sync-9n2tj" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.421032 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-config-data\") pod \"placement-db-sync-9n2tj\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") " pod="openstack/placement-db-sync-9n2tj" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.421097 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-combined-ca-bundle\") pod \"placement-db-sync-9n2tj\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") " pod="openstack/placement-db-sync-9n2tj" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.421124 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-scripts\") pod \"placement-db-sync-9n2tj\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") " pod="openstack/placement-db-sync-9n2tj" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.421146 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c6489e4-d44c-4e7d-a451-620da210060e-logs\") pod \"placement-db-sync-9n2tj\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") " pod="openstack/placement-db-sync-9n2tj" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.422289 4870 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c6489e4-d44c-4e7d-a451-620da210060e-logs\") pod \"placement-db-sync-9n2tj\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") " pod="openstack/placement-db-sync-9n2tj" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.425310 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-combined-ca-bundle\") pod \"placement-db-sync-9n2tj\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") " pod="openstack/placement-db-sync-9n2tj" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.433840 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-scripts\") pod \"placement-db-sync-9n2tj\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") " pod="openstack/placement-db-sync-9n2tj" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.436897 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-config-data\") pod \"placement-db-sync-9n2tj\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") " pod="openstack/placement-db-sync-9n2tj" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.444064 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpgz9\" (UniqueName: \"kubernetes.io/projected/6c6489e4-d44c-4e7d-a451-620da210060e-kube-api-access-fpgz9\") pod \"placement-db-sync-9n2tj\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") " pod="openstack/placement-db-sync-9n2tj" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.477659 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vmbrl" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.499089 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.510840 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-6hkdm" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.521377 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-s4xns" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.533290 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.542307 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9n2tj" Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.714024 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-shbhr"] Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.812859 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4mwgd"] Feb 16 17:20:27 crc kubenswrapper[4870]: I0216 17:20:27.993475 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-shbhr" event={"ID":"84df055c-e479-445e-843b-eb84b43e3f7d","Type":"ContainerStarted","Data":"600f6d520d7cdf0f232be4eab0d91247d68d8fc23784074e67a76d9067738e67"} Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.038197 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39dde5ae-2522-43c8-a0e0-9e257052bab6","Type":"ContainerStarted","Data":"a5863345c0a4ef948392a543e1826ebda5c91cc01c924c03a09810dfc1064310"} Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.050292 4870 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-db-sync-4mwgd" event={"ID":"9719dd82-cec9-4a56-ae93-29ccca75a3ef","Type":"ContainerStarted","Data":"f4c248383c60ec58d16a019495fba4b1aa73de9677bac50cfcd7b99e34cb3780"} Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.067621 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5r2tl" event={"ID":"998e2386-0941-4f2b-8e23-d77138831ad4","Type":"ContainerStarted","Data":"5e2622801776ff1c1cd43fd2ec2e7f94f8dcbc4d95b6b312535e6b3a306936fe"} Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.081443 4870 generic.go:334] "Generic (PLEG): container finished" podID="709bdb2a-907c-41ab-bafc-08979f79771e" containerID="bac6fd3a2e40947b83ae49a06b526dbaf5d890b8dcd7a21448a705703bf92676" exitCode=0 Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.081531 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.082231 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" event={"ID":"709bdb2a-907c-41ab-bafc-08979f79771e","Type":"ContainerDied","Data":"bac6fd3a2e40947b83ae49a06b526dbaf5d890b8dcd7a21448a705703bf92676"} Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.108856 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-5r2tl" podStartSLOduration=4.892070328 podStartE2EDuration="33.108829489s" podCreationTimestamp="2026-02-16 17:19:55 +0000 UTC" firstStartedPulling="2026-02-16 17:19:57.449676291 +0000 UTC m=+1201.933140675" lastFinishedPulling="2026-02-16 17:20:25.666435452 +0000 UTC m=+1230.149899836" observedRunningTime="2026-02-16 17:20:28.097251698 +0000 UTC m=+1232.580716092" watchObservedRunningTime="2026-02-16 17:20:28.108829489 +0000 UTC m=+1232.592293873" Feb 16 17:20:28 crc kubenswrapper[4870]: W0216 17:20:28.120371 4870 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod913c8c11_d196_4f95_9aba_a4552bcbef88.slice/crio-ae41561c78e90ecb180d7bd4b04a7f470c2b13af6b9fefae0fc379889019f5c2 WatchSource:0}: Error finding container ae41561c78e90ecb180d7bd4b04a7f470c2b13af6b9fefae0fc379889019f5c2: Status 404 returned error can't find the container with id ae41561c78e90ecb180d7bd4b04a7f470c2b13af6b9fefae0fc379889019f5c2 Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.120909 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.125380 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vmbrl"] Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.263621 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-dns-swift-storage-0\") pod \"eae2f115-138b-415a-b173-6205d02ab9af\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.263783 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-config\") pod \"eae2f115-138b-415a-b173-6205d02ab9af\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.263824 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-ovsdbserver-sb\") pod \"eae2f115-138b-415a-b173-6205d02ab9af\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.263928 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2rz6\" 
(UniqueName: \"kubernetes.io/projected/eae2f115-138b-415a-b173-6205d02ab9af-kube-api-access-r2rz6\") pod \"eae2f115-138b-415a-b173-6205d02ab9af\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.263998 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-dns-svc\") pod \"eae2f115-138b-415a-b173-6205d02ab9af\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.264064 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-ovsdbserver-nb\") pod \"eae2f115-138b-415a-b173-6205d02ab9af\" (UID: \"eae2f115-138b-415a-b173-6205d02ab9af\") " Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.265170 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eae2f115-138b-415a-b173-6205d02ab9af" (UID: "eae2f115-138b-415a-b173-6205d02ab9af"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.265526 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eae2f115-138b-415a-b173-6205d02ab9af" (UID: "eae2f115-138b-415a-b173-6205d02ab9af"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.265782 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eae2f115-138b-415a-b173-6205d02ab9af" (UID: "eae2f115-138b-415a-b173-6205d02ab9af"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.266007 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eae2f115-138b-415a-b173-6205d02ab9af" (UID: "eae2f115-138b-415a-b173-6205d02ab9af"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.266065 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-config" (OuterVolumeSpecName: "config") pod "eae2f115-138b-415a-b173-6205d02ab9af" (UID: "eae2f115-138b-415a-b173-6205d02ab9af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.276401 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eae2f115-138b-415a-b173-6205d02ab9af-kube-api-access-r2rz6" (OuterVolumeSpecName: "kube-api-access-r2rz6") pod "eae2f115-138b-415a-b173-6205d02ab9af" (UID: "eae2f115-138b-415a-b173-6205d02ab9af"). InnerVolumeSpecName "kube-api-access-r2rz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.329587 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.366041 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.366067 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2rz6\" (UniqueName: \"kubernetes.io/projected/eae2f115-138b-415a-b173-6205d02ab9af-kube-api-access-r2rz6\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.366082 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.366096 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.366109 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.366121 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae2f115-138b-415a-b173-6205d02ab9af-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.755287 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9n2tj"] Feb 16 17:20:28 crc 
kubenswrapper[4870]: I0216 17:20:28.778097 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-s4xns"] Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.806689 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zgtq8"] Feb 16 17:20:28 crc kubenswrapper[4870]: W0216 17:20:28.810125 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod375ecf8f_1d93_40fb_85dc_c0eabcef46c3.slice/crio-a4ebaf774ff5fcd91c7e4267d8c6555edb91d24ca103cad17035ab7b89bf58e8 WatchSource:0}: Error finding container a4ebaf774ff5fcd91c7e4267d8c6555edb91d24ca103cad17035ab7b89bf58e8: Status 404 returned error can't find the container with id a4ebaf774ff5fcd91c7e4267d8c6555edb91d24ca103cad17035ab7b89bf58e8 Feb 16 17:20:28 crc kubenswrapper[4870]: W0216 17:20:28.818825 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bcafa21_00bc_4d37_9294_c3f378c43012.slice/crio-d08c720a8368340a97f4b1d18091a9aa9f1200eccbaeff851f5a751179c9e079 WatchSource:0}: Error finding container d08c720a8368340a97f4b1d18091a9aa9f1200eccbaeff851f5a751179c9e079: Status 404 returned error can't find the container with id d08c720a8368340a97f4b1d18091a9aa9f1200eccbaeff851f5a751179c9e079 Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.824745 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-6hkdm"] Feb 16 17:20:28 crc kubenswrapper[4870]: I0216 17:20:28.939924 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:29 crc kubenswrapper[4870]: E0216 17:20:29.011092 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:20:29 crc kubenswrapper[4870]: E0216 17:20:29.011160 4870 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:20:29 crc kubenswrapper[4870]: E0216 17:20:29.011300 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7b2p7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6hkdm_openstack(34a86750-1fff-4add-8462-7ab805ec7f89): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:20:29 crc kubenswrapper[4870]: E0216 17:20:29.015677 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.086036 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-dns-svc\") pod \"709bdb2a-907c-41ab-bafc-08979f79771e\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.086142 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff4sw\" (UniqueName: \"kubernetes.io/projected/709bdb2a-907c-41ab-bafc-08979f79771e-kube-api-access-ff4sw\") pod \"709bdb2a-907c-41ab-bafc-08979f79771e\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.086220 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-ovsdbserver-nb\") pod \"709bdb2a-907c-41ab-bafc-08979f79771e\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.086270 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-dns-swift-storage-0\") pod \"709bdb2a-907c-41ab-bafc-08979f79771e\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.086343 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-config\") pod \"709bdb2a-907c-41ab-bafc-08979f79771e\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.086377 4870 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-ovsdbserver-sb\") pod \"709bdb2a-907c-41ab-bafc-08979f79771e\" (UID: \"709bdb2a-907c-41ab-bafc-08979f79771e\") " Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.096197 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/709bdb2a-907c-41ab-bafc-08979f79771e-kube-api-access-ff4sw" (OuterVolumeSpecName: "kube-api-access-ff4sw") pod "709bdb2a-907c-41ab-bafc-08979f79771e" (UID: "709bdb2a-907c-41ab-bafc-08979f79771e"). InnerVolumeSpecName "kube-api-access-ff4sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.121752 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-6hkdm" event={"ID":"34a86750-1fff-4add-8462-7ab805ec7f89","Type":"ContainerStarted","Data":"8d6dc806a7648f90a506010c9b455b6dfc9847f6acc7462654fb03114ea91153"} Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.123358 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "709bdb2a-907c-41ab-bafc-08979f79771e" (UID: "709bdb2a-907c-41ab-bafc-08979f79771e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:29 crc kubenswrapper[4870]: E0216 17:20:29.123864 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.124830 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9n2tj" event={"ID":"6c6489e4-d44c-4e7d-a451-620da210060e","Type":"ContainerStarted","Data":"91abbd2ce6f65a5e2935c0c4fceb6d5d54cee6e04893c05a208a6f214fdc47eb"} Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.127438 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s4xns" event={"ID":"375ecf8f-1d93-40fb-85dc-c0eabcef46c3","Type":"ContainerStarted","Data":"a4ebaf774ff5fcd91c7e4267d8c6555edb91d24ca103cad17035ab7b89bf58e8"} Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.147430 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" event={"ID":"6bcafa21-00bc-4d37-9294-c3f378c43012","Type":"ContainerStarted","Data":"d08c720a8368340a97f4b1d18091a9aa9f1200eccbaeff851f5a751179c9e079"} Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.150979 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-config" (OuterVolumeSpecName: "config") pod "709bdb2a-907c-41ab-bafc-08979f79771e" (UID: "709bdb2a-907c-41ab-bafc-08979f79771e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.154437 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "709bdb2a-907c-41ab-bafc-08979f79771e" (UID: "709bdb2a-907c-41ab-bafc-08979f79771e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.163016 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38836e81-1b99-4b50-ada2-40727db1f248","Type":"ContainerStarted","Data":"e12bfe26cc023fa1a43c28d86e2efe388cb80db9a8d047be6d78d74d39f21fe4"} Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.165403 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" event={"ID":"709bdb2a-907c-41ab-bafc-08979f79771e","Type":"ContainerDied","Data":"cdbe8d1cb18691a2c1e3e3394a9d3cdb56ffeadc557c2f9f49a1278285950fff"} Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.165450 4870 scope.go:117] "RemoveContainer" containerID="bac6fd3a2e40947b83ae49a06b526dbaf5d890b8dcd7a21448a705703bf92676" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.165641 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-jmmxs" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.182758 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vmbrl" event={"ID":"913c8c11-d196-4f95-9aba-a4552bcbef88","Type":"ContainerStarted","Data":"2a6bc0ee4027889558f6b7a4fce9de3b3296fcd1cfa1a1b7bb384094461632ce"} Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.182806 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vmbrl" event={"ID":"913c8c11-d196-4f95-9aba-a4552bcbef88","Type":"ContainerStarted","Data":"ae41561c78e90ecb180d7bd4b04a7f470c2b13af6b9fefae0fc379889019f5c2"} Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.188143 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.188169 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.188180 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.188189 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff4sw\" (UniqueName: \"kubernetes.io/projected/709bdb2a-907c-41ab-bafc-08979f79771e-kube-api-access-ff4sw\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.188456 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-dns-swift-storage-0" (OuterVolumeSpecName: 
"dns-swift-storage-0") pod "709bdb2a-907c-41ab-bafc-08979f79771e" (UID: "709bdb2a-907c-41ab-bafc-08979f79771e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.189510 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-shbhr" event={"ID":"84df055c-e479-445e-843b-eb84b43e3f7d","Type":"ContainerStarted","Data":"3f8171683589800873f30d0092fa287a67395ef35b531b5c45043adbdd173150"} Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.194788 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "709bdb2a-907c-41ab-bafc-08979f79771e" (UID: "709bdb2a-907c-41ab-bafc-08979f79771e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.206025 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vmbrl" podStartSLOduration=3.206007933 podStartE2EDuration="3.206007933s" podCreationTimestamp="2026-02-16 17:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:29.204268863 +0000 UTC m=+1233.687733277" watchObservedRunningTime="2026-02-16 17:20:29.206007933 +0000 UTC m=+1233.689472317" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.209528 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-frpm6" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.212784 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39dde5ae-2522-43c8-a0e0-9e257052bab6","Type":"ContainerStarted","Data":"1d0ed86ea3cc12ac65dfe82d60846f9b587f586c6773536f1f6b5f1ca2c87b95"} Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.279065 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-shbhr" podStartSLOduration=3.279042666 podStartE2EDuration="3.279042666s" podCreationTimestamp="2026-02-16 17:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:29.226339083 +0000 UTC m=+1233.709803467" watchObservedRunningTime="2026-02-16 17:20:29.279042666 +0000 UTC m=+1233.762507050" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.292591 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.292572452 podStartE2EDuration="16.292572452s" podCreationTimestamp="2026-02-16 17:20:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:29.277670247 +0000 UTC m=+1233.761134641" watchObservedRunningTime="2026-02-16 17:20:29.292572452 +0000 UTC m=+1233.776036836" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.294181 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.295028 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/709bdb2a-907c-41ab-bafc-08979f79771e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.365114 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-frpm6"] Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.405040 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-frpm6"] Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.637770 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jmmxs"] Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.665536 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-jmmxs"] Feb 16 17:20:29 crc kubenswrapper[4870]: I0216 17:20:29.872590 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:20:30 crc kubenswrapper[4870]: I0216 17:20:30.238315 4870 generic.go:334] "Generic (PLEG): container finished" podID="6bcafa21-00bc-4d37-9294-c3f378c43012" containerID="e567062079a537ea46752bd7a9d3395d5a041cb7eb9d23fc0e08f21d6e29934f" exitCode=0 Feb 16 17:20:30 crc kubenswrapper[4870]: I0216 17:20:30.243152 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="709bdb2a-907c-41ab-bafc-08979f79771e" path="/var/lib/kubelet/pods/709bdb2a-907c-41ab-bafc-08979f79771e/volumes" Feb 16 17:20:30 crc kubenswrapper[4870]: I0216 17:20:30.243711 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eae2f115-138b-415a-b173-6205d02ab9af" path="/var/lib/kubelet/pods/eae2f115-138b-415a-b173-6205d02ab9af/volumes" Feb 16 17:20:30 crc kubenswrapper[4870]: I0216 17:20:30.244141 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" event={"ID":"6bcafa21-00bc-4d37-9294-c3f378c43012","Type":"ContainerDied","Data":"e567062079a537ea46752bd7a9d3395d5a041cb7eb9d23fc0e08f21d6e29934f"} Feb 16 
17:20:30 crc kubenswrapper[4870]: E0216 17:20:30.258334 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:20:32 crc kubenswrapper[4870]: I0216 17:20:32.275249 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" event={"ID":"6bcafa21-00bc-4d37-9294-c3f378c43012","Type":"ContainerStarted","Data":"5d3761c76b23160a41ff645851e1bb97d3018801d6cc6692ea9284298ca7cf52"} Feb 16 17:20:32 crc kubenswrapper[4870]: I0216 17:20:32.276509 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:32 crc kubenswrapper[4870]: I0216 17:20:32.309491 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" podStartSLOduration=6.309472401 podStartE2EDuration="6.309472401s" podCreationTimestamp="2026-02-16 17:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:32.308183595 +0000 UTC m=+1236.791647979" watchObservedRunningTime="2026-02-16 17:20:32.309472401 +0000 UTC m=+1236.792936785" Feb 16 17:20:34 crc kubenswrapper[4870]: I0216 17:20:34.135994 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:35 crc kubenswrapper[4870]: I0216 17:20:35.325099 4870 generic.go:334] "Generic (PLEG): container finished" podID="84df055c-e479-445e-843b-eb84b43e3f7d" containerID="3f8171683589800873f30d0092fa287a67395ef35b531b5c45043adbdd173150" exitCode=0 Feb 16 17:20:35 crc kubenswrapper[4870]: I0216 17:20:35.325203 4870 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/keystone-bootstrap-shbhr" event={"ID":"84df055c-e479-445e-843b-eb84b43e3f7d","Type":"ContainerDied","Data":"3f8171683589800873f30d0092fa287a67395ef35b531b5c45043adbdd173150"} Feb 16 17:20:37 crc kubenswrapper[4870]: I0216 17:20:37.535120 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" Feb 16 17:20:37 crc kubenswrapper[4870]: I0216 17:20:37.601716 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xf2lr"] Feb 16 17:20:37 crc kubenswrapper[4870]: I0216 17:20:37.602656 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-xf2lr" podUID="c3bc8c41-0b58-4a15-adf0-698dcbf23806" containerName="dnsmasq-dns" containerID="cri-o://3902d0a093480de91f748b31235dbd0b8acbd02d2b81fb835eb91a91a66388b0" gracePeriod=10 Feb 16 17:20:38 crc kubenswrapper[4870]: I0216 17:20:38.365983 4870 generic.go:334] "Generic (PLEG): container finished" podID="c3bc8c41-0b58-4a15-adf0-698dcbf23806" containerID="3902d0a093480de91f748b31235dbd0b8acbd02d2b81fb835eb91a91a66388b0" exitCode=0 Feb 16 17:20:38 crc kubenswrapper[4870]: I0216 17:20:38.366064 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xf2lr" event={"ID":"c3bc8c41-0b58-4a15-adf0-698dcbf23806","Type":"ContainerDied","Data":"3902d0a093480de91f748b31235dbd0b8acbd02d2b81fb835eb91a91a66388b0"} Feb 16 17:20:40 crc kubenswrapper[4870]: I0216 17:20:40.979681 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:40 crc kubenswrapper[4870]: I0216 17:20:40.988975 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.014104 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-fernet-keys\") pod \"84df055c-e479-445e-843b-eb84b43e3f7d\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.014456 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-config-data\") pod \"84df055c-e479-445e-843b-eb84b43e3f7d\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.014503 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87h74\" (UniqueName: \"kubernetes.io/projected/84df055c-e479-445e-843b-eb84b43e3f7d-kube-api-access-87h74\") pod \"84df055c-e479-445e-843b-eb84b43e3f7d\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.014609 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-credential-keys\") pod \"84df055c-e479-445e-843b-eb84b43e3f7d\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.014726 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-scripts\") pod \"84df055c-e479-445e-843b-eb84b43e3f7d\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.014744 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-combined-ca-bundle\") pod \"84df055c-e479-445e-843b-eb84b43e3f7d\" (UID: \"84df055c-e479-445e-843b-eb84b43e3f7d\") " Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.043001 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "84df055c-e479-445e-843b-eb84b43e3f7d" (UID: "84df055c-e479-445e-843b-eb84b43e3f7d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.052272 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "84df055c-e479-445e-843b-eb84b43e3f7d" (UID: "84df055c-e479-445e-843b-eb84b43e3f7d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.054517 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-scripts" (OuterVolumeSpecName: "scripts") pod "84df055c-e479-445e-843b-eb84b43e3f7d" (UID: "84df055c-e479-445e-843b-eb84b43e3f7d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.056979 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84df055c-e479-445e-843b-eb84b43e3f7d-kube-api-access-87h74" (OuterVolumeSpecName: "kube-api-access-87h74") pod "84df055c-e479-445e-843b-eb84b43e3f7d" (UID: "84df055c-e479-445e-843b-eb84b43e3f7d"). InnerVolumeSpecName "kube-api-access-87h74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.060898 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84df055c-e479-445e-843b-eb84b43e3f7d" (UID: "84df055c-e479-445e-843b-eb84b43e3f7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.063409 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-config-data" (OuterVolumeSpecName: "config-data") pod "84df055c-e479-445e-843b-eb84b43e3f7d" (UID: "84df055c-e479-445e-843b-eb84b43e3f7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.116996 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-config\") pod \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.117364 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-ovsdbserver-nb\") pod \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.117459 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xr4n\" (UniqueName: \"kubernetes.io/projected/c3bc8c41-0b58-4a15-adf0-698dcbf23806-kube-api-access-9xr4n\") pod \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " Feb 16 
17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.117715 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-ovsdbserver-sb\") pod \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.117743 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-dns-svc\") pod \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\" (UID: \"c3bc8c41-0b58-4a15-adf0-698dcbf23806\") " Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.118341 4870 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.118365 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.118377 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.118397 4870 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.118410 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84df055c-e479-445e-843b-eb84b43e3f7d-config-data\") on node \"crc\" DevicePath \"\"" Feb 
16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.118422 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87h74\" (UniqueName: \"kubernetes.io/projected/84df055c-e479-445e-843b-eb84b43e3f7d-kube-api-access-87h74\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.140161 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3bc8c41-0b58-4a15-adf0-698dcbf23806-kube-api-access-9xr4n" (OuterVolumeSpecName: "kube-api-access-9xr4n") pod "c3bc8c41-0b58-4a15-adf0-698dcbf23806" (UID: "c3bc8c41-0b58-4a15-adf0-698dcbf23806"). InnerVolumeSpecName "kube-api-access-9xr4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.169346 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-config" (OuterVolumeSpecName: "config") pod "c3bc8c41-0b58-4a15-adf0-698dcbf23806" (UID: "c3bc8c41-0b58-4a15-adf0-698dcbf23806"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.183277 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3bc8c41-0b58-4a15-adf0-698dcbf23806" (UID: "c3bc8c41-0b58-4a15-adf0-698dcbf23806"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.191885 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3bc8c41-0b58-4a15-adf0-698dcbf23806" (UID: "c3bc8c41-0b58-4a15-adf0-698dcbf23806"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.198715 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3bc8c41-0b58-4a15-adf0-698dcbf23806" (UID: "c3bc8c41-0b58-4a15-adf0-698dcbf23806"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.221046 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xr4n\" (UniqueName: \"kubernetes.io/projected/c3bc8c41-0b58-4a15-adf0-698dcbf23806-kube-api-access-9xr4n\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.221084 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.221096 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.221110 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.221122 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3bc8c41-0b58-4a15-adf0-698dcbf23806-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.393333 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-shbhr" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.393357 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-shbhr" event={"ID":"84df055c-e479-445e-843b-eb84b43e3f7d","Type":"ContainerDied","Data":"600f6d520d7cdf0f232be4eab0d91247d68d8fc23784074e67a76d9067738e67"} Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.393398 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="600f6d520d7cdf0f232be4eab0d91247d68d8fc23784074e67a76d9067738e67" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.396618 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xf2lr" event={"ID":"c3bc8c41-0b58-4a15-adf0-698dcbf23806","Type":"ContainerDied","Data":"d9d3343b6360dd600a7ac6638a5dc0b34467353d6fc864da5380d6b8c65c1541"} Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.396669 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xf2lr" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.396708 4870 scope.go:117] "RemoveContainer" containerID="3902d0a093480de91f748b31235dbd0b8acbd02d2b81fb835eb91a91a66388b0" Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.441816 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xf2lr"] Feb 16 17:20:41 crc kubenswrapper[4870]: I0216 17:20:41.450345 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xf2lr"] Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.059985 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-shbhr"] Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.068156 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-shbhr"] Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.205888 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7qf8f"] Feb 16 17:20:42 crc kubenswrapper[4870]: E0216 17:20:42.206296 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709bdb2a-907c-41ab-bafc-08979f79771e" containerName="init" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.206314 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="709bdb2a-907c-41ab-bafc-08979f79771e" containerName="init" Feb 16 17:20:42 crc kubenswrapper[4870]: E0216 17:20:42.206336 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84df055c-e479-445e-843b-eb84b43e3f7d" containerName="keystone-bootstrap" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.206342 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="84df055c-e479-445e-843b-eb84b43e3f7d" containerName="keystone-bootstrap" Feb 16 17:20:42 crc kubenswrapper[4870]: E0216 17:20:42.206355 4870 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c3bc8c41-0b58-4a15-adf0-698dcbf23806" containerName="init" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.206360 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3bc8c41-0b58-4a15-adf0-698dcbf23806" containerName="init" Feb 16 17:20:42 crc kubenswrapper[4870]: E0216 17:20:42.206373 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3bc8c41-0b58-4a15-adf0-698dcbf23806" containerName="dnsmasq-dns" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.206379 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3bc8c41-0b58-4a15-adf0-698dcbf23806" containerName="dnsmasq-dns" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.206566 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="84df055c-e479-445e-843b-eb84b43e3f7d" containerName="keystone-bootstrap" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.206580 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3bc8c41-0b58-4a15-adf0-698dcbf23806" containerName="dnsmasq-dns" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.206590 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="709bdb2a-907c-41ab-bafc-08979f79771e" containerName="init" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.207231 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.210205 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.210361 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.210211 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.210716 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pm4j6" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.237711 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84df055c-e479-445e-843b-eb84b43e3f7d" path="/var/lib/kubelet/pods/84df055c-e479-445e-843b-eb84b43e3f7d/volumes" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.238589 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3bc8c41-0b58-4a15-adf0-698dcbf23806" path="/var/lib/kubelet/pods/c3bc8c41-0b58-4a15-adf0-698dcbf23806/volumes" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.239282 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7qf8f"] Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.244126 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-credential-keys\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.244220 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-fernet-keys\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.244245 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-scripts\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.244272 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-combined-ca-bundle\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.244305 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-config-data\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.244323 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnzgb\" (UniqueName: \"kubernetes.io/projected/e04460b7-407f-4474-bf99-264869cf6529-kube-api-access-gnzgb\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: E0216 17:20:42.342704 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:20:42 crc kubenswrapper[4870]: E0216 17:20:42.342769 4870 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:20:42 crc kubenswrapper[4870]: E0216 17:20:42.342906 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7b2p7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6hkdm_openstack(34a86750-1fff-4add-8462-7ab805ec7f89): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:20:42 crc kubenswrapper[4870]: E0216 17:20:42.349110 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.350965 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-credential-keys\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.351107 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-fernet-keys\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.351144 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-scripts\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.351179 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-combined-ca-bundle\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.351227 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-config-data\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" 
Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.351248 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnzgb\" (UniqueName: \"kubernetes.io/projected/e04460b7-407f-4474-bf99-264869cf6529-kube-api-access-gnzgb\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.358810 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-combined-ca-bundle\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.360539 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-fernet-keys\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.361294 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-credential-keys\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.361344 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-scripts\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.364364 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-config-data\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.370035 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnzgb\" (UniqueName: \"kubernetes.io/projected/e04460b7-407f-4474-bf99-264869cf6529-kube-api-access-gnzgb\") pod \"keystone-bootstrap-7qf8f\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") " pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:42 crc kubenswrapper[4870]: I0216 17:20:42.524759 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7qf8f" Feb 16 17:20:43 crc kubenswrapper[4870]: I0216 17:20:43.433884 4870 generic.go:334] "Generic (PLEG): container finished" podID="998e2386-0941-4f2b-8e23-d77138831ad4" containerID="5e2622801776ff1c1cd43fd2ec2e7f94f8dcbc4d95b6b312535e6b3a306936fe" exitCode=0 Feb 16 17:20:43 crc kubenswrapper[4870]: I0216 17:20:43.433983 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5r2tl" event={"ID":"998e2386-0941-4f2b-8e23-d77138831ad4","Type":"ContainerDied","Data":"5e2622801776ff1c1cd43fd2ec2e7f94f8dcbc4d95b6b312535e6b3a306936fe"} Feb 16 17:20:44 crc kubenswrapper[4870]: I0216 17:20:44.135976 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:44 crc kubenswrapper[4870]: I0216 17:20:44.141787 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:44 crc kubenswrapper[4870]: I0216 17:20:44.449939 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 17:20:47 crc kubenswrapper[4870]: I0216 17:20:47.479455 4870 generic.go:334] "Generic 
(PLEG): container finished" podID="913c8c11-d196-4f95-9aba-a4552bcbef88" containerID="2a6bc0ee4027889558f6b7a4fce9de3b3296fcd1cfa1a1b7bb384094461632ce" exitCode=0 Feb 16 17:20:47 crc kubenswrapper[4870]: I0216 17:20:47.479843 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vmbrl" event={"ID":"913c8c11-d196-4f95-9aba-a4552bcbef88","Type":"ContainerDied","Data":"2a6bc0ee4027889558f6b7a4fce9de3b3296fcd1cfa1a1b7bb384094461632ce"} Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.332817 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5r2tl" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.341189 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vmbrl" Feb 16 17:20:50 crc kubenswrapper[4870]: E0216 17:20:50.382037 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 16 17:20:50 crc kubenswrapper[4870]: E0216 17:20:50.382173 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd5h5f6h59h5c4h64dhb8h665h5b8h56h647h7ch59fh668h68dhd9h67bh656h658hb9h5cch65chf5h9ch585h589h656hb5h5cfh56h557hch5bfq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnjmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(38836e81-1b99-4b50-ada2-40727db1f248): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.438703 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh42k\" (UniqueName: \"kubernetes.io/projected/913c8c11-d196-4f95-9aba-a4552bcbef88-kube-api-access-vh42k\") pod \"913c8c11-d196-4f95-9aba-a4552bcbef88\" (UID: \"913c8c11-d196-4f95-9aba-a4552bcbef88\") " Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.438759 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913c8c11-d196-4f95-9aba-a4552bcbef88-combined-ca-bundle\") pod \"913c8c11-d196-4f95-9aba-a4552bcbef88\" (UID: \"913c8c11-d196-4f95-9aba-a4552bcbef88\") " Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.438850 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/913c8c11-d196-4f95-9aba-a4552bcbef88-config\") pod 
\"913c8c11-d196-4f95-9aba-a4552bcbef88\" (UID: \"913c8c11-d196-4f95-9aba-a4552bcbef88\") " Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.438887 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-config-data\") pod \"998e2386-0941-4f2b-8e23-d77138831ad4\" (UID: \"998e2386-0941-4f2b-8e23-d77138831ad4\") " Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.438939 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-combined-ca-bundle\") pod \"998e2386-0941-4f2b-8e23-d77138831ad4\" (UID: \"998e2386-0941-4f2b-8e23-d77138831ad4\") " Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.439030 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rsdg\" (UniqueName: \"kubernetes.io/projected/998e2386-0941-4f2b-8e23-d77138831ad4-kube-api-access-5rsdg\") pod \"998e2386-0941-4f2b-8e23-d77138831ad4\" (UID: \"998e2386-0941-4f2b-8e23-d77138831ad4\") " Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.439055 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-db-sync-config-data\") pod \"998e2386-0941-4f2b-8e23-d77138831ad4\" (UID: \"998e2386-0941-4f2b-8e23-d77138831ad4\") " Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.445660 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/913c8c11-d196-4f95-9aba-a4552bcbef88-kube-api-access-vh42k" (OuterVolumeSpecName: "kube-api-access-vh42k") pod "913c8c11-d196-4f95-9aba-a4552bcbef88" (UID: "913c8c11-d196-4f95-9aba-a4552bcbef88"). InnerVolumeSpecName "kube-api-access-vh42k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.446483 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/998e2386-0941-4f2b-8e23-d77138831ad4-kube-api-access-5rsdg" (OuterVolumeSpecName: "kube-api-access-5rsdg") pod "998e2386-0941-4f2b-8e23-d77138831ad4" (UID: "998e2386-0941-4f2b-8e23-d77138831ad4"). InnerVolumeSpecName "kube-api-access-5rsdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.458351 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "998e2386-0941-4f2b-8e23-d77138831ad4" (UID: "998e2386-0941-4f2b-8e23-d77138831ad4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.475625 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "998e2386-0941-4f2b-8e23-d77138831ad4" (UID: "998e2386-0941-4f2b-8e23-d77138831ad4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.477743 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913c8c11-d196-4f95-9aba-a4552bcbef88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "913c8c11-d196-4f95-9aba-a4552bcbef88" (UID: "913c8c11-d196-4f95-9aba-a4552bcbef88"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.481132 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913c8c11-d196-4f95-9aba-a4552bcbef88-config" (OuterVolumeSpecName: "config") pod "913c8c11-d196-4f95-9aba-a4552bcbef88" (UID: "913c8c11-d196-4f95-9aba-a4552bcbef88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.493107 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-config-data" (OuterVolumeSpecName: "config-data") pod "998e2386-0941-4f2b-8e23-d77138831ad4" (UID: "998e2386-0941-4f2b-8e23-d77138831ad4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.522646 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5r2tl" event={"ID":"998e2386-0941-4f2b-8e23-d77138831ad4","Type":"ContainerDied","Data":"c7a7e3522f626f904eb99bdb545b15d439a783cf30da8648720780fb9c554597"} Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.522710 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7a7e3522f626f904eb99bdb545b15d439a783cf30da8648720780fb9c554597" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.522769 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5r2tl" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.533842 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vmbrl" event={"ID":"913c8c11-d196-4f95-9aba-a4552bcbef88","Type":"ContainerDied","Data":"ae41561c78e90ecb180d7bd4b04a7f470c2b13af6b9fefae0fc379889019f5c2"} Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.533895 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae41561c78e90ecb180d7bd4b04a7f470c2b13af6b9fefae0fc379889019f5c2" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.534012 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vmbrl" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.541577 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rsdg\" (UniqueName: \"kubernetes.io/projected/998e2386-0941-4f2b-8e23-d77138831ad4-kube-api-access-5rsdg\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.541603 4870 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.541616 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh42k\" (UniqueName: \"kubernetes.io/projected/913c8c11-d196-4f95-9aba-a4552bcbef88-kube-api-access-vh42k\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.541627 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913c8c11-d196-4f95-9aba-a4552bcbef88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.541635 4870 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/913c8c11-d196-4f95-9aba-a4552bcbef88-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.541645 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:50 crc kubenswrapper[4870]: I0216 17:20:50.541654 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998e2386-0941-4f2b-8e23-d77138831ad4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.630583 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-dwgsh"] Feb 16 17:20:51 crc kubenswrapper[4870]: E0216 17:20:51.631282 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913c8c11-d196-4f95-9aba-a4552bcbef88" containerName="neutron-db-sync" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.631297 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="913c8c11-d196-4f95-9aba-a4552bcbef88" containerName="neutron-db-sync" Feb 16 17:20:51 crc kubenswrapper[4870]: E0216 17:20:51.631320 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998e2386-0941-4f2b-8e23-d77138831ad4" containerName="glance-db-sync" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.631328 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="998e2386-0941-4f2b-8e23-d77138831ad4" containerName="glance-db-sync" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.631504 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="913c8c11-d196-4f95-9aba-a4552bcbef88" containerName="neutron-db-sync" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.631553 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="998e2386-0941-4f2b-8e23-d77138831ad4" 
containerName="glance-db-sync" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.632604 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.663582 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-dwgsh"] Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.672471 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnps9\" (UniqueName: \"kubernetes.io/projected/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-kube-api-access-xnps9\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.672611 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.672654 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-config\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.672867 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 
16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.672915 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.673196 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.739533 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-686cd77f6d-7xrcx"] Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.741140 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.746281 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.746445 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qr49x" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.746609 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.746751 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.765734 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-686cd77f6d-7xrcx"] Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.775445 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-ovndb-tls-certs\") pod \"neutron-686cd77f6d-7xrcx\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.775507 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.775535 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnps9\" (UniqueName: \"kubernetes.io/projected/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-kube-api-access-xnps9\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" 
(UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.775600 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwrk4\" (UniqueName: \"kubernetes.io/projected/0de98d85-29b8-44b5-b120-72c0c42e4714-kube-api-access-wwrk4\") pod \"neutron-686cd77f6d-7xrcx\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.775657 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-config\") pod \"neutron-686cd77f6d-7xrcx\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.775679 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.775700 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-config\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.775809 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-combined-ca-bundle\") pod \"neutron-686cd77f6d-7xrcx\" (UID: 
\"0de98d85-29b8-44b5-b120-72c0c42e4714\") " pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.775834 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.775854 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.776104 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-httpd-config\") pod \"neutron-686cd77f6d-7xrcx\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.776708 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-config\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.776842 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 
17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.776939 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.777063 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.777078 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.825544 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnps9\" (UniqueName: \"kubernetes.io/projected/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-kube-api-access-xnps9\") pod \"dnsmasq-dns-7d88d7b95f-dwgsh\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.877393 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-combined-ca-bundle\") pod \"neutron-686cd77f6d-7xrcx\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.877750 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-httpd-config\") pod \"neutron-686cd77f6d-7xrcx\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.877793 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-ovndb-tls-certs\") pod \"neutron-686cd77f6d-7xrcx\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.877863 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwrk4\" (UniqueName: \"kubernetes.io/projected/0de98d85-29b8-44b5-b120-72c0c42e4714-kube-api-access-wwrk4\") pod \"neutron-686cd77f6d-7xrcx\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.877914 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-config\") pod \"neutron-686cd77f6d-7xrcx\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.881806 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-combined-ca-bundle\") pod \"neutron-686cd77f6d-7xrcx\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.887513 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-dwgsh"] Feb 16 
17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.893917 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.896800 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-httpd-config\") pod \"neutron-686cd77f6d-7xrcx\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.897312 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-ovndb-tls-certs\") pod \"neutron-686cd77f6d-7xrcx\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.899378 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-config\") pod \"neutron-686cd77f6d-7xrcx\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.905189 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwrk4\" (UniqueName: \"kubernetes.io/projected/0de98d85-29b8-44b5-b120-72c0c42e4714-kube-api-access-wwrk4\") pod \"neutron-686cd77f6d-7xrcx\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.919188 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-9kr25"] Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.921288 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.949253 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-9kr25"] Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.980131 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-dns-svc\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.980172 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.980261 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.980293 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-config\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.980320 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:51 crc kubenswrapper[4870]: I0216 17:20:51.980342 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5269z\" (UniqueName: \"kubernetes.io/projected/b8e4be99-05cc-436e-9634-b6302dc49fa5-kube-api-access-5269z\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.068602 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.081493 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-dns-svc\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.081532 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.081623 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " 
pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.081660 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-config\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.081682 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.081713 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5269z\" (UniqueName: \"kubernetes.io/projected/b8e4be99-05cc-436e-9634-b6302dc49fa5-kube-api-access-5269z\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.082800 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-dns-svc\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.082864 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:52 crc kubenswrapper[4870]: 
I0216 17:20:52.082902 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.083013 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-config\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.083419 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.122349 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5269z\" (UniqueName: \"kubernetes.io/projected/b8e4be99-05cc-436e-9634-b6302dc49fa5-kube-api-access-5269z\") pod \"dnsmasq-dns-55f844cf75-9kr25\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.278860 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.651241 4870 scope.go:117] "RemoveContainer" containerID="39303fef85f7be6111427c24631a5cc06c369fa79aafa84d12499587fab2cda3" Feb 16 17:20:52 crc kubenswrapper[4870]: E0216 17:20:52.675790 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 16 17:20:52 crc kubenswrapper[4870]: E0216 17:20:52.676057 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.
json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhj74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-4mwgd_openstack(9719dd82-cec9-4a56-ae93-29ccca75a3ef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 17:20:52 crc kubenswrapper[4870]: E0216 17:20:52.677370 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-4mwgd" podUID="9719dd82-cec9-4a56-ae93-29ccca75a3ef" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.741118 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.742841 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.746787 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.747349 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wpt8f" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.747730 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.759272 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.801372 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2b447ca3-eb84-4df0-98c5-b825a71e47bb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.801429 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.801460 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-config-data\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc 
kubenswrapper[4870]: I0216 17:20:52.801541 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcskn\" (UniqueName: \"kubernetes.io/projected/2b447ca3-eb84-4df0-98c5-b825a71e47bb-kube-api-access-wcskn\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.801579 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b447ca3-eb84-4df0-98c5-b825a71e47bb-logs\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.801612 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.801649 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-scripts\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.903475 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcskn\" (UniqueName: \"kubernetes.io/projected/2b447ca3-eb84-4df0-98c5-b825a71e47bb-kube-api-access-wcskn\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " 
pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.903960 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b447ca3-eb84-4df0-98c5-b825a71e47bb-logs\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.904000 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.904035 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-scripts\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.904180 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2b447ca3-eb84-4df0-98c5-b825a71e47bb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.904194 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " 
pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.904210 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-config-data\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.904931 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b447ca3-eb84-4df0-98c5-b825a71e47bb-logs\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.909262 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2b447ca3-eb84-4df0-98c5-b825a71e47bb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.920067 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-config-data\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.925247 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.925517 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a6e7916025a705e37bf42169fdb7099d11d639aaccb3a9ce061702c007eb46f2/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.932173 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.932895 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-scripts\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.936577 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcskn\" (UniqueName: \"kubernetes.io/projected/2b447ca3-eb84-4df0-98c5-b825a71e47bb-kube-api-access-wcskn\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:52 crc kubenswrapper[4870]: I0216 17:20:52.980928 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") pod \"glance-default-external-api-0\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " pod="openstack/glance-default-external-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.063490 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.099629 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.104033 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.113470 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.115548 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.217751 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-config-data\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.218090 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-logs\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.218115 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.218145 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55js5\" (UniqueName: \"kubernetes.io/projected/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-kube-api-access-55js5\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.218165 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-scripts\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.218319 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-00d7366f-6279-474d-83f9-372df7eb27ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.218350 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 
17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.320600 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-config-data\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.320673 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-logs\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.320709 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.320755 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55js5\" (UniqueName: \"kubernetes.io/projected/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-kube-api-access-55js5\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.320784 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-scripts\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.320824 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-00d7366f-6279-474d-83f9-372df7eb27ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.320856 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.321246 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-logs\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.321837 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.323699 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.323728 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-00d7366f-6279-474d-83f9-372df7eb27ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3ef4ca648708bfc498381351edc504adf309a14cbb60cff0c6075d7e8a16e973/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.329988 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-scripts\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.329987 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.331213 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-config-data\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.343637 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55js5\" (UniqueName: \"kubernetes.io/projected/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-kube-api-access-55js5\") pod 
\"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.388287 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-00d7366f-6279-474d-83f9-372df7eb27ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") pod \"glance-default-internal-api-0\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.469358 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7qf8f"] Feb 16 17:20:53 crc kubenswrapper[4870]: W0216 17:20:53.475146 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode04460b7_407f_4474_bf99_264869cf6529.slice/crio-7368b7325511162f9b1cf53b1f692d9fd6110aefb17d192c14b28ca4d39ed2b4 WatchSource:0}: Error finding container 7368b7325511162f9b1cf53b1f692d9fd6110aefb17d192c14b28ca4d39ed2b4: Status 404 returned error can't find the container with id 7368b7325511162f9b1cf53b1f692d9fd6110aefb17d192c14b28ca4d39ed2b4 Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.480684 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-dwgsh"] Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.516675 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.585083 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-686cd77f6d-7xrcx"] Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.593359 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7qf8f" event={"ID":"e04460b7-407f-4474-bf99-264869cf6529","Type":"ContainerStarted","Data":"7368b7325511162f9b1cf53b1f692d9fd6110aefb17d192c14b28ca4d39ed2b4"} Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.623278 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" event={"ID":"436d1b04-0d4b-4f16-9c58-b44635f2fc7e","Type":"ContainerStarted","Data":"137aebdfcf6feead379443531e5934e4477a876b6acc2a538388608e25cbbe9c"} Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.633412 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9n2tj" event={"ID":"6c6489e4-d44c-4e7d-a451-620da210060e","Type":"ContainerStarted","Data":"9372cf886b3676a5f4ee950cb13b4255a9c9461c9d5001be205fec2bfb180c6f"} Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.657312 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s4xns" event={"ID":"375ecf8f-1d93-40fb-85dc-c0eabcef46c3","Type":"ContainerStarted","Data":"957836a0ae9aac98f89a454b9eaf0bbd4596d3ba03ecffa8d4c32a70e7df8d08"} Feb 16 17:20:53 crc kubenswrapper[4870]: E0216 17:20:53.667717 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-4mwgd" podUID="9719dd82-cec9-4a56-ae93-29ccca75a3ef" Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.685759 4870 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/placement-db-sync-9n2tj" podStartSLOduration=3.846637935 podStartE2EDuration="27.685735764s" podCreationTimestamp="2026-02-16 17:20:26 +0000 UTC" firstStartedPulling="2026-02-16 17:20:28.7876352 +0000 UTC m=+1233.271099584" lastFinishedPulling="2026-02-16 17:20:52.626733029 +0000 UTC m=+1257.110197413" observedRunningTime="2026-02-16 17:20:53.652362062 +0000 UTC m=+1258.135826446" watchObservedRunningTime="2026-02-16 17:20:53.685735764 +0000 UTC m=+1258.169200148" Feb 16 17:20:53 crc kubenswrapper[4870]: W0216 17:20:53.698422 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8e4be99_05cc_436e_9634_b6302dc49fa5.slice/crio-9442dbe80aae68e00f21c0039010eb83f993b893c22fcfddc25cbfa6c4634f56 WatchSource:0}: Error finding container 9442dbe80aae68e00f21c0039010eb83f993b893c22fcfddc25cbfa6c4634f56: Status 404 returned error can't find the container with id 9442dbe80aae68e00f21c0039010eb83f993b893c22fcfddc25cbfa6c4634f56 Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.740441 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-9kr25"] Feb 16 17:20:53 crc kubenswrapper[4870]: I0216 17:20:53.746600 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-s4xns" podStartSLOduration=3.855860298 podStartE2EDuration="27.74657645s" podCreationTimestamp="2026-02-16 17:20:26 +0000 UTC" firstStartedPulling="2026-02-16 17:20:28.815176725 +0000 UTC m=+1233.298641109" lastFinishedPulling="2026-02-16 17:20:52.705892877 +0000 UTC m=+1257.189357261" observedRunningTime="2026-02-16 17:20:53.678106257 +0000 UTC m=+1258.161570651" watchObservedRunningTime="2026-02-16 17:20:53.74657645 +0000 UTC m=+1258.230040834" Feb 16 17:20:54 crc kubenswrapper[4870]: I0216 17:20:54.000263 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:20:54 
crc kubenswrapper[4870]: I0216 17:20:54.263813 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:20:54 crc kubenswrapper[4870]: E0216 17:20:54.344457 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod436d1b04_0d4b_4f16_9c58_b44635f2fc7e.slice/crio-17c24b966727997ab44a7b26a429a28522eb43bff91da21080c97e81edf6e6d9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod436d1b04_0d4b_4f16_9c58_b44635f2fc7e.slice/crio-conmon-17c24b966727997ab44a7b26a429a28522eb43bff91da21080c97e81edf6e6d9.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:20:54 crc kubenswrapper[4870]: I0216 17:20:54.680180 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-686cd77f6d-7xrcx" event={"ID":"0de98d85-29b8-44b5-b120-72c0c42e4714","Type":"ContainerStarted","Data":"0ec551d28975d88a0487efcaac2b828708eda530e96b131da31719005a4fffed"} Feb 16 17:20:54 crc kubenswrapper[4870]: I0216 17:20:54.693807 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-9kr25" event={"ID":"b8e4be99-05cc-436e-9634-b6302dc49fa5","Type":"ContainerStarted","Data":"9442dbe80aae68e00f21c0039010eb83f993b893c22fcfddc25cbfa6c4634f56"} Feb 16 17:20:54 crc kubenswrapper[4870]: I0216 17:20:54.710513 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7qf8f" event={"ID":"e04460b7-407f-4474-bf99-264869cf6529","Type":"ContainerStarted","Data":"9458b2d325f9566563ba2c68b0534d1b1d5b17072d854677ad3dea909e2e35f2"} Feb 16 17:20:54 crc kubenswrapper[4870]: I0216 17:20:54.728121 4870 generic.go:334] "Generic (PLEG): container finished" podID="436d1b04-0d4b-4f16-9c58-b44635f2fc7e" containerID="17c24b966727997ab44a7b26a429a28522eb43bff91da21080c97e81edf6e6d9" 
exitCode=0 Feb 16 17:20:54 crc kubenswrapper[4870]: I0216 17:20:54.728241 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" event={"ID":"436d1b04-0d4b-4f16-9c58-b44635f2fc7e","Type":"ContainerDied","Data":"17c24b966727997ab44a7b26a429a28522eb43bff91da21080c97e81edf6e6d9"} Feb 16 17:20:54 crc kubenswrapper[4870]: I0216 17:20:54.740261 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7qf8f" podStartSLOduration=12.740237841999999 podStartE2EDuration="12.740237842s" podCreationTimestamp="2026-02-16 17:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:54.73491356 +0000 UTC m=+1259.218377944" watchObservedRunningTime="2026-02-16 17:20:54.740237842 +0000 UTC m=+1259.223702226" Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.504925 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.596930 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-ovsdbserver-nb\") pod \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.597053 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-config\") pod \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.597127 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnps9\" (UniqueName: \"kubernetes.io/projected/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-kube-api-access-xnps9\") pod \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.597155 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-dns-swift-storage-0\") pod \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.597193 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-dns-svc\") pod \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.597266 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-ovsdbserver-sb\") pod \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\" (UID: \"436d1b04-0d4b-4f16-9c58-b44635f2fc7e\") " Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.626307 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-kube-api-access-xnps9" (OuterVolumeSpecName: "kube-api-access-xnps9") pod "436d1b04-0d4b-4f16-9c58-b44635f2fc7e" (UID: "436d1b04-0d4b-4f16-9c58-b44635f2fc7e"). InnerVolumeSpecName "kube-api-access-xnps9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.688937 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "436d1b04-0d4b-4f16-9c58-b44635f2fc7e" (UID: "436d1b04-0d4b-4f16-9c58-b44635f2fc7e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.695878 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "436d1b04-0d4b-4f16-9c58-b44635f2fc7e" (UID: "436d1b04-0d4b-4f16-9c58-b44635f2fc7e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.700642 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-config" (OuterVolumeSpecName: "config") pod "436d1b04-0d4b-4f16-9c58-b44635f2fc7e" (UID: "436d1b04-0d4b-4f16-9c58-b44635f2fc7e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.700916 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.700972 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.700988 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnps9\" (UniqueName: \"kubernetes.io/projected/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-kube-api-access-xnps9\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.701002 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.705003 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "436d1b04-0d4b-4f16-9c58-b44635f2fc7e" (UID: "436d1b04-0d4b-4f16-9c58-b44635f2fc7e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.714615 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "436d1b04-0d4b-4f16-9c58-b44635f2fc7e" (UID: "436d1b04-0d4b-4f16-9c58-b44635f2fc7e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.762910 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2b447ca3-eb84-4df0-98c5-b825a71e47bb","Type":"ContainerStarted","Data":"f9d8075c2b8813c7b2db6589e2514624fa66e83a4396e3bc337fcc88e102c539"} Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.771884 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" event={"ID":"436d1b04-0d4b-4f16-9c58-b44635f2fc7e","Type":"ContainerDied","Data":"137aebdfcf6feead379443531e5934e4477a876b6acc2a538388608e25cbbe9c"} Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.771934 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-dwgsh" Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.772083 4870 scope.go:117] "RemoveContainer" containerID="17c24b966727997ab44a7b26a429a28522eb43bff91da21080c97e81edf6e6d9" Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.774744 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-686cd77f6d-7xrcx" event={"ID":"0de98d85-29b8-44b5-b120-72c0c42e4714","Type":"ContainerStarted","Data":"dc694537ffbad444d7679dfda4dd89d7882b89b56e2c3de4fd663fbd4021d6cd"} Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.781863 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"90a811c8-a3c8-4e80-8a63-da92b6fd8c15","Type":"ContainerStarted","Data":"381500c2c309b2063666c7fe9f14d88b4cfda360142a551ed25d92d110e0bc9a"} Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.787352 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-9kr25" event={"ID":"b8e4be99-05cc-436e-9634-b6302dc49fa5","Type":"ContainerStarted","Data":"e31bce6764ed30ed827af4bd809dcccff0614cace4afe5ae05286628706fe31c"} Feb 
16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.807916 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.807985 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/436d1b04-0d4b-4f16-9c58-b44635f2fc7e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.952032 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-dwgsh"] Feb 16 17:20:55 crc kubenswrapper[4870]: I0216 17:20:55.980934 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-dwgsh"] Feb 16 17:20:56 crc kubenswrapper[4870]: E0216 17:20:56.231242 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:20:56 crc kubenswrapper[4870]: I0216 17:20:56.261643 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="436d1b04-0d4b-4f16-9c58-b44635f2fc7e" path="/var/lib/kubelet/pods/436d1b04-0d4b-4f16-9c58-b44635f2fc7e/volumes" Feb 16 17:20:56 crc kubenswrapper[4870]: I0216 17:20:56.362509 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:20:56 crc kubenswrapper[4870]: I0216 17:20:56.432697 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:20:56 crc kubenswrapper[4870]: I0216 17:20:56.798141 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"90a811c8-a3c8-4e80-8a63-da92b6fd8c15","Type":"ContainerStarted","Data":"c277d28bdab5e9be86875e354b38d5606ff6301e6ed8c49ab2167ebdd9ee1d43"} Feb 16 17:20:56 crc kubenswrapper[4870]: I0216 17:20:56.800290 4870 generic.go:334] "Generic (PLEG): container finished" podID="b8e4be99-05cc-436e-9634-b6302dc49fa5" containerID="e31bce6764ed30ed827af4bd809dcccff0614cace4afe5ae05286628706fe31c" exitCode=0 Feb 16 17:20:56 crc kubenswrapper[4870]: I0216 17:20:56.800411 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-9kr25" event={"ID":"b8e4be99-05cc-436e-9634-b6302dc49fa5","Type":"ContainerDied","Data":"e31bce6764ed30ed827af4bd809dcccff0614cace4afe5ae05286628706fe31c"} Feb 16 17:20:56 crc kubenswrapper[4870]: I0216 17:20:56.802421 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2b447ca3-eb84-4df0-98c5-b825a71e47bb","Type":"ContainerStarted","Data":"6d62600f6e4149101e592a6732bc688b5bb35b1f5b8b2b1f40fa3cfaa145ac59"} Feb 16 17:20:56 crc kubenswrapper[4870]: I0216 17:20:56.806408 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38836e81-1b99-4b50-ada2-40727db1f248","Type":"ContainerStarted","Data":"30d9110ca712c5906cf63cfc54c4cbba0cc83abca2a5553851f9284e795acfb0"} Feb 16 17:20:56 crc kubenswrapper[4870]: I0216 17:20:56.813836 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-686cd77f6d-7xrcx" event={"ID":"0de98d85-29b8-44b5-b120-72c0c42e4714","Type":"ContainerStarted","Data":"5dd6804a265c7e338bf704f744968ed596cdcd5ce460d8721e17916ef0e10370"} Feb 16 17:20:56 crc kubenswrapper[4870]: I0216 17:20:56.814275 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:20:56 crc kubenswrapper[4870]: I0216 17:20:56.871198 4870 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/neutron-686cd77f6d-7xrcx" podStartSLOduration=5.871179491 podStartE2EDuration="5.871179491s" podCreationTimestamp="2026-02-16 17:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:56.862143903 +0000 UTC m=+1261.345608307" watchObservedRunningTime="2026-02-16 17:20:56.871179491 +0000 UTC m=+1261.354643875" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.728310 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-85848b6785-nrbf6"] Feb 16 17:20:57 crc kubenswrapper[4870]: E0216 17:20:57.729841 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="436d1b04-0d4b-4f16-9c58-b44635f2fc7e" containerName="init" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.729931 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="436d1b04-0d4b-4f16-9c58-b44635f2fc7e" containerName="init" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.730262 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="436d1b04-0d4b-4f16-9c58-b44635f2fc7e" containerName="init" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.731974 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.736238 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.736465 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.764211 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85848b6785-nrbf6"] Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.876459 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-internal-tls-certs\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.876575 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-ovndb-tls-certs\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.876710 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-combined-ca-bundle\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.876890 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-public-tls-certs\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.876973 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-httpd-config\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.877156 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rts4r\" (UniqueName: \"kubernetes.io/projected/b846df4e-a215-42b4-a15d-08eea2d03652-kube-api-access-rts4r\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.877193 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-config\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.979179 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-httpd-config\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.979324 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rts4r\" (UniqueName: 
\"kubernetes.io/projected/b846df4e-a215-42b4-a15d-08eea2d03652-kube-api-access-rts4r\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.979356 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-config\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.979409 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-internal-tls-certs\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.979455 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-ovndb-tls-certs\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.980122 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-combined-ca-bundle\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.980300 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-public-tls-certs\") pod 
\"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.984883 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-httpd-config\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.986737 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-ovndb-tls-certs\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.987734 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-combined-ca-bundle\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.989110 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-public-tls-certs\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.991567 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-internal-tls-certs\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 
17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.994458 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-config\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:57 crc kubenswrapper[4870]: I0216 17:20:57.998345 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rts4r\" (UniqueName: \"kubernetes.io/projected/b846df4e-a215-42b4-a15d-08eea2d03652-kube-api-access-rts4r\") pod \"neutron-85848b6785-nrbf6\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:58 crc kubenswrapper[4870]: I0216 17:20:58.066408 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:20:58 crc kubenswrapper[4870]: I0216 17:20:58.693085 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85848b6785-nrbf6"] Feb 16 17:20:58 crc kubenswrapper[4870]: I0216 17:20:58.879639 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"90a811c8-a3c8-4e80-8a63-da92b6fd8c15","Type":"ContainerStarted","Data":"eb562f1bc0275574b01d8b7d79ee373c5eb04b791ca01f977a788e418b69a228"} Feb 16 17:20:58 crc kubenswrapper[4870]: I0216 17:20:58.880492 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="90a811c8-a3c8-4e80-8a63-da92b6fd8c15" containerName="glance-httpd" containerID="cri-o://eb562f1bc0275574b01d8b7d79ee373c5eb04b791ca01f977a788e418b69a228" gracePeriod=30 Feb 16 17:20:58 crc kubenswrapper[4870]: I0216 17:20:58.881195 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="90a811c8-a3c8-4e80-8a63-da92b6fd8c15" 
containerName="glance-log" containerID="cri-o://c277d28bdab5e9be86875e354b38d5606ff6301e6ed8c49ab2167ebdd9ee1d43" gracePeriod=30 Feb 16 17:20:58 crc kubenswrapper[4870]: I0216 17:20:58.898939 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-9kr25" event={"ID":"b8e4be99-05cc-436e-9634-b6302dc49fa5","Type":"ContainerStarted","Data":"2debca4bf0415947ae6bcc801479db9bfed32100a26e90a6a83ee654f99cc8a2"} Feb 16 17:20:58 crc kubenswrapper[4870]: I0216 17:20:58.899734 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:20:58 crc kubenswrapper[4870]: I0216 17:20:58.910566 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85848b6785-nrbf6" event={"ID":"b846df4e-a215-42b4-a15d-08eea2d03652","Type":"ContainerStarted","Data":"7472706d949b4d1b14cd2bd9acdca1eebb350c6c417774d20b49b4e0cd24a9de"} Feb 16 17:20:58 crc kubenswrapper[4870]: I0216 17:20:58.912194 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.912168326 podStartE2EDuration="6.912168326s" podCreationTimestamp="2026-02-16 17:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:58.907489183 +0000 UTC m=+1263.390953587" watchObservedRunningTime="2026-02-16 17:20:58.912168326 +0000 UTC m=+1263.395632710" Feb 16 17:20:58 crc kubenswrapper[4870]: I0216 17:20:58.934192 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2b447ca3-eb84-4df0-98c5-b825a71e47bb","Type":"ContainerStarted","Data":"de4087913188a9a79da235973c130e35bf9efb69f04d508e96635d24012fbb15"} Feb 16 17:20:58 crc kubenswrapper[4870]: I0216 17:20:58.938218 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-9kr25" 
podStartSLOduration=7.938194919 podStartE2EDuration="7.938194919s" podCreationTimestamp="2026-02-16 17:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:58.934244376 +0000 UTC m=+1263.417708760" watchObservedRunningTime="2026-02-16 17:20:58.938194919 +0000 UTC m=+1263.421659303" Feb 16 17:20:59 crc kubenswrapper[4870]: I0216 17:20:59.946442 4870 generic.go:334] "Generic (PLEG): container finished" podID="90a811c8-a3c8-4e80-8a63-da92b6fd8c15" containerID="c277d28bdab5e9be86875e354b38d5606ff6301e6ed8c49ab2167ebdd9ee1d43" exitCode=143 Feb 16 17:20:59 crc kubenswrapper[4870]: I0216 17:20:59.947205 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"90a811c8-a3c8-4e80-8a63-da92b6fd8c15","Type":"ContainerDied","Data":"c277d28bdab5e9be86875e354b38d5606ff6301e6ed8c49ab2167ebdd9ee1d43"} Feb 16 17:20:59 crc kubenswrapper[4870]: I0216 17:20:59.950360 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2b447ca3-eb84-4df0-98c5-b825a71e47bb" containerName="glance-log" containerID="cri-o://6d62600f6e4149101e592a6732bc688b5bb35b1f5b8b2b1f40fa3cfaa145ac59" gracePeriod=30 Feb 16 17:20:59 crc kubenswrapper[4870]: I0216 17:20:59.950625 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85848b6785-nrbf6" event={"ID":"b846df4e-a215-42b4-a15d-08eea2d03652","Type":"ContainerStarted","Data":"e919b9cd5390cb83a421aa35ab78f2118fcebad544b96e741075fbd5400cce7d"} Feb 16 17:20:59 crc kubenswrapper[4870]: I0216 17:20:59.951844 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2b447ca3-eb84-4df0-98c5-b825a71e47bb" containerName="glance-httpd" containerID="cri-o://de4087913188a9a79da235973c130e35bf9efb69f04d508e96635d24012fbb15" gracePeriod=30 Feb 16 
17:20:59 crc kubenswrapper[4870]: I0216 17:20:59.986376 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.986357994 podStartE2EDuration="8.986357994s" podCreationTimestamp="2026-02-16 17:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:20:59.97568264 +0000 UTC m=+1264.459147024" watchObservedRunningTime="2026-02-16 17:20:59.986357994 +0000 UTC m=+1264.469822368" Feb 16 17:21:00 crc kubenswrapper[4870]: I0216 17:21:00.965013 4870 generic.go:334] "Generic (PLEG): container finished" podID="90a811c8-a3c8-4e80-8a63-da92b6fd8c15" containerID="eb562f1bc0275574b01d8b7d79ee373c5eb04b791ca01f977a788e418b69a228" exitCode=0 Feb 16 17:21:00 crc kubenswrapper[4870]: I0216 17:21:00.965354 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"90a811c8-a3c8-4e80-8a63-da92b6fd8c15","Type":"ContainerDied","Data":"eb562f1bc0275574b01d8b7d79ee373c5eb04b791ca01f977a788e418b69a228"} Feb 16 17:21:00 crc kubenswrapper[4870]: I0216 17:21:00.970834 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85848b6785-nrbf6" event={"ID":"b846df4e-a215-42b4-a15d-08eea2d03652","Type":"ContainerStarted","Data":"1498d0e1e74cabb6b0bc8a4d251685ed6a76e0a95d68f461e07ce9f1bbdbbd70"} Feb 16 17:21:00 crc kubenswrapper[4870]: I0216 17:21:00.971005 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:21:00 crc kubenswrapper[4870]: I0216 17:21:00.975968 4870 generic.go:334] "Generic (PLEG): container finished" podID="2b447ca3-eb84-4df0-98c5-b825a71e47bb" containerID="de4087913188a9a79da235973c130e35bf9efb69f04d508e96635d24012fbb15" exitCode=0 Feb 16 17:21:00 crc kubenswrapper[4870]: I0216 17:21:00.976000 4870 generic.go:334] "Generic (PLEG): container finished" 
podID="2b447ca3-eb84-4df0-98c5-b825a71e47bb" containerID="6d62600f6e4149101e592a6732bc688b5bb35b1f5b8b2b1f40fa3cfaa145ac59" exitCode=143 Feb 16 17:21:00 crc kubenswrapper[4870]: I0216 17:21:00.976064 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2b447ca3-eb84-4df0-98c5-b825a71e47bb","Type":"ContainerDied","Data":"de4087913188a9a79da235973c130e35bf9efb69f04d508e96635d24012fbb15"} Feb 16 17:21:00 crc kubenswrapper[4870]: I0216 17:21:00.976090 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2b447ca3-eb84-4df0-98c5-b825a71e47bb","Type":"ContainerDied","Data":"6d62600f6e4149101e592a6732bc688b5bb35b1f5b8b2b1f40fa3cfaa145ac59"} Feb 16 17:21:00 crc kubenswrapper[4870]: I0216 17:21:00.979259 4870 generic.go:334] "Generic (PLEG): container finished" podID="6c6489e4-d44c-4e7d-a451-620da210060e" containerID="9372cf886b3676a5f4ee950cb13b4255a9c9461c9d5001be205fec2bfb180c6f" exitCode=0 Feb 16 17:21:00 crc kubenswrapper[4870]: I0216 17:21:00.980118 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9n2tj" event={"ID":"6c6489e4-d44c-4e7d-a451-620da210060e","Type":"ContainerDied","Data":"9372cf886b3676a5f4ee950cb13b4255a9c9461c9d5001be205fec2bfb180c6f"} Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.014054 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-85848b6785-nrbf6" podStartSLOduration=4.014035826 podStartE2EDuration="4.014035826s" podCreationTimestamp="2026-02-16 17:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:00.994831568 +0000 UTC m=+1265.478295952" watchObservedRunningTime="2026-02-16 17:21:01.014035826 +0000 UTC m=+1265.497500210" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.164356 4870 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.273307 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-combined-ca-bundle\") pod \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.273361 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55js5\" (UniqueName: \"kubernetes.io/projected/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-kube-api-access-55js5\") pod \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.273401 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-logs\") pod \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.273434 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-httpd-run\") pod \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.273564 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") pod \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.273618 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-scripts\") pod \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.273639 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-config-data\") pod \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\" (UID: \"90a811c8-a3c8-4e80-8a63-da92b6fd8c15\") " Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.278689 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "90a811c8-a3c8-4e80-8a63-da92b6fd8c15" (UID: "90a811c8-a3c8-4e80-8a63-da92b6fd8c15"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.278824 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-logs" (OuterVolumeSpecName: "logs") pod "90a811c8-a3c8-4e80-8a63-da92b6fd8c15" (UID: "90a811c8-a3c8-4e80-8a63-da92b6fd8c15"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.284880 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-kube-api-access-55js5" (OuterVolumeSpecName: "kube-api-access-55js5") pod "90a811c8-a3c8-4e80-8a63-da92b6fd8c15" (UID: "90a811c8-a3c8-4e80-8a63-da92b6fd8c15"). InnerVolumeSpecName "kube-api-access-55js5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.286319 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-scripts" (OuterVolumeSpecName: "scripts") pod "90a811c8-a3c8-4e80-8a63-da92b6fd8c15" (UID: "90a811c8-a3c8-4e80-8a63-da92b6fd8c15"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.318923 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae" (OuterVolumeSpecName: "glance") pod "90a811c8-a3c8-4e80-8a63-da92b6fd8c15" (UID: "90a811c8-a3c8-4e80-8a63-da92b6fd8c15"). InnerVolumeSpecName "pvc-00d7366f-6279-474d-83f9-372df7eb27ae". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.327225 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90a811c8-a3c8-4e80-8a63-da92b6fd8c15" (UID: "90a811c8-a3c8-4e80-8a63-da92b6fd8c15"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.352269 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-config-data" (OuterVolumeSpecName: "config-data") pod "90a811c8-a3c8-4e80-8a63-da92b6fd8c15" (UID: "90a811c8-a3c8-4e80-8a63-da92b6fd8c15"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.375638 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.375676 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.375690 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55js5\" (UniqueName: \"kubernetes.io/projected/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-kube-api-access-55js5\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.375702 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.375710 4870 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.375747 4870 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-00d7366f-6279-474d-83f9-372df7eb27ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") on node \"crc\" " Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.375758 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90a811c8-a3c8-4e80-8a63-da92b6fd8c15-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 
17:21:01.399217 4870 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.399366 4870 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-00d7366f-6279-474d-83f9-372df7eb27ae" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae") on node "crc" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.469041 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.477351 4870 reconciler_common.go:293] "Volume detached for volume \"pvc-00d7366f-6279-474d-83f9-372df7eb27ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.579060 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-config-data\") pod \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.579133 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-scripts\") pod \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.579188 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-combined-ca-bundle\") pod \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " Feb 16 
17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.579334 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") pod \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.579481 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcskn\" (UniqueName: \"kubernetes.io/projected/2b447ca3-eb84-4df0-98c5-b825a71e47bb-kube-api-access-wcskn\") pod \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.579523 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2b447ca3-eb84-4df0-98c5-b825a71e47bb-httpd-run\") pod \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.579609 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b447ca3-eb84-4df0-98c5-b825a71e47bb-logs\") pod \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\" (UID: \"2b447ca3-eb84-4df0-98c5-b825a71e47bb\") " Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.580600 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b447ca3-eb84-4df0-98c5-b825a71e47bb-logs" (OuterVolumeSpecName: "logs") pod "2b447ca3-eb84-4df0-98c5-b825a71e47bb" (UID: "2b447ca3-eb84-4df0-98c5-b825a71e47bb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.581552 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b447ca3-eb84-4df0-98c5-b825a71e47bb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2b447ca3-eb84-4df0-98c5-b825a71e47bb" (UID: "2b447ca3-eb84-4df0-98c5-b825a71e47bb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.595628 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-scripts" (OuterVolumeSpecName: "scripts") pod "2b447ca3-eb84-4df0-98c5-b825a71e47bb" (UID: "2b447ca3-eb84-4df0-98c5-b825a71e47bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.596178 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b447ca3-eb84-4df0-98c5-b825a71e47bb-kube-api-access-wcskn" (OuterVolumeSpecName: "kube-api-access-wcskn") pod "2b447ca3-eb84-4df0-98c5-b825a71e47bb" (UID: "2b447ca3-eb84-4df0-98c5-b825a71e47bb"). InnerVolumeSpecName "kube-api-access-wcskn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.600237 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae" (OuterVolumeSpecName: "glance") pod "2b447ca3-eb84-4df0-98c5-b825a71e47bb" (UID: "2b447ca3-eb84-4df0-98c5-b825a71e47bb"). InnerVolumeSpecName "pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.622818 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b447ca3-eb84-4df0-98c5-b825a71e47bb" (UID: "2b447ca3-eb84-4df0-98c5-b825a71e47bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.650527 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-config-data" (OuterVolumeSpecName: "config-data") pod "2b447ca3-eb84-4df0-98c5-b825a71e47bb" (UID: "2b447ca3-eb84-4df0-98c5-b825a71e47bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.681845 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcskn\" (UniqueName: \"kubernetes.io/projected/2b447ca3-eb84-4df0-98c5-b825a71e47bb-kube-api-access-wcskn\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.682113 4870 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2b447ca3-eb84-4df0-98c5-b825a71e47bb-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.682196 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b447ca3-eb84-4df0-98c5-b825a71e47bb-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.682268 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:01 crc 
kubenswrapper[4870]: I0216 17:21:01.682456 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.682777 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b447ca3-eb84-4df0-98c5-b825a71e47bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.682839 4870 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") on node \"crc\" " Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.724245 4870 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.724476 4870 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae") on node "crc" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.785882 4870 reconciler_common.go:293] "Volume detached for volume \"pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.993245 4870 generic.go:334] "Generic (PLEG): container finished" podID="375ecf8f-1d93-40fb-85dc-c0eabcef46c3" containerID="957836a0ae9aac98f89a454b9eaf0bbd4596d3ba03ecffa8d4c32a70e7df8d08" exitCode=0 Feb 16 17:21:01 crc kubenswrapper[4870]: I0216 17:21:01.993321 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s4xns" event={"ID":"375ecf8f-1d93-40fb-85dc-c0eabcef46c3","Type":"ContainerDied","Data":"957836a0ae9aac98f89a454b9eaf0bbd4596d3ba03ecffa8d4c32a70e7df8d08"} Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.004612 4870 generic.go:334] "Generic (PLEG): container finished" podID="e04460b7-407f-4474-bf99-264869cf6529" containerID="9458b2d325f9566563ba2c68b0534d1b1d5b17072d854677ad3dea909e2e35f2" exitCode=0 Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.004674 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7qf8f" event={"ID":"e04460b7-407f-4474-bf99-264869cf6529","Type":"ContainerDied","Data":"9458b2d325f9566563ba2c68b0534d1b1d5b17072d854677ad3dea909e2e35f2"} Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.010217 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"90a811c8-a3c8-4e80-8a63-da92b6fd8c15","Type":"ContainerDied","Data":"381500c2c309b2063666c7fe9f14d88b4cfda360142a551ed25d92d110e0bc9a"} Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.010278 4870 scope.go:117] "RemoveContainer" containerID="eb562f1bc0275574b01d8b7d79ee373c5eb04b791ca01f977a788e418b69a228" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.010376 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.031223 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.031273 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2b447ca3-eb84-4df0-98c5-b825a71e47bb","Type":"ContainerDied","Data":"f9d8075c2b8813c7b2db6589e2514624fa66e83a4396e3bc337fcc88e102c539"} Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.088871 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.108018 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.121028 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.132300 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.139550 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:21:02 crc kubenswrapper[4870]: E0216 17:21:02.140033 4870 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2b447ca3-eb84-4df0-98c5-b825a71e47bb" containerName="glance-log" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.140047 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b447ca3-eb84-4df0-98c5-b825a71e47bb" containerName="glance-log" Feb 16 17:21:02 crc kubenswrapper[4870]: E0216 17:21:02.140060 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b447ca3-eb84-4df0-98c5-b825a71e47bb" containerName="glance-httpd" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.140068 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b447ca3-eb84-4df0-98c5-b825a71e47bb" containerName="glance-httpd" Feb 16 17:21:02 crc kubenswrapper[4870]: E0216 17:21:02.140082 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a811c8-a3c8-4e80-8a63-da92b6fd8c15" containerName="glance-httpd" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.140090 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a811c8-a3c8-4e80-8a63-da92b6fd8c15" containerName="glance-httpd" Feb 16 17:21:02 crc kubenswrapper[4870]: E0216 17:21:02.140111 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a811c8-a3c8-4e80-8a63-da92b6fd8c15" containerName="glance-log" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.140117 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a811c8-a3c8-4e80-8a63-da92b6fd8c15" containerName="glance-log" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.140311 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a811c8-a3c8-4e80-8a63-da92b6fd8c15" containerName="glance-log" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.140325 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b447ca3-eb84-4df0-98c5-b825a71e47bb" containerName="glance-log" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.140339 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a811c8-a3c8-4e80-8a63-da92b6fd8c15" 
containerName="glance-httpd" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.140347 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b447ca3-eb84-4df0-98c5-b825a71e47bb" containerName="glance-httpd" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.141543 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.145593 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wpt8f" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.145600 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.145671 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.145929 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.157172 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.169034 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.171814 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.177037 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.177318 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.178047 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.246828 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b447ca3-eb84-4df0-98c5-b825a71e47bb" path="/var/lib/kubelet/pods/2b447ca3-eb84-4df0-98c5-b825a71e47bb/volumes" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.252574 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90a811c8-a3c8-4e80-8a63-da92b6fd8c15" path="/var/lib/kubelet/pods/90a811c8-a3c8-4e80-8a63-da92b6fd8c15/volumes" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.296268 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn4zz\" (UniqueName: \"kubernetes.io/projected/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-kube-api-access-sn4zz\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.296416 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.296466 4870 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.296493 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-config-data\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.296515 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.296540 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5nhj\" (UniqueName: \"kubernetes.io/projected/00a8e7ab-716d-408e-a531-c49194dca35c-kube-api-access-g5nhj\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.296560 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0" 
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.296578 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-scripts\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.296601 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.296637 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.296667 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-logs\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0" Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.296692 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00a8e7ab-716d-408e-a531-c49194dca35c-logs\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " 
pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.296730 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.296765 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-00d7366f-6279-474d-83f9-372df7eb27ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.296805 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.296889 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00a8e7ab-716d-408e-a531-c49194dca35c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.398188 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.398255 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-00d7366f-6279-474d-83f9-372df7eb27ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.398301 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.398339 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00a8e7ab-716d-408e-a531-c49194dca35c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.398404 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn4zz\" (UniqueName: \"kubernetes.io/projected/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-kube-api-access-sn4zz\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.398430 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.398464 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.398490 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-config-data\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.398516 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.398533 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5nhj\" (UniqueName: \"kubernetes.io/projected/00a8e7ab-716d-408e-a531-c49194dca35c-kube-api-access-g5nhj\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.398552 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.398578 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-scripts\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.398598 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.399490 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.401002 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.401072 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-logs\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.401105 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00a8e7ab-716d-408e-a531-c49194dca35c-logs\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.401400 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00a8e7ab-716d-408e-a531-c49194dca35c-logs\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.403085 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00a8e7ab-716d-408e-a531-c49194dca35c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.404521 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-logs\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.405918 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.405967 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a6e7916025a705e37bf42169fdb7099d11d639aaccb3a9ce061702c007eb46f2/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.408589 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.408669 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-scripts\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.409568 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.409600 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-00d7366f-6279-474d-83f9-372df7eb27ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3ef4ca648708bfc498381351edc504adf309a14cbb60cff0c6075d7e8a16e973/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.413204 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.413679 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.415007 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.416739 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-config-data\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.419055 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.419601 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.422984 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn4zz\" (UniqueName: \"kubernetes.io/projected/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-kube-api-access-sn4zz\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.435581 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5nhj\" (UniqueName: \"kubernetes.io/projected/00a8e7ab-716d-408e-a531-c49194dca35c-kube-api-access-g5nhj\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.459127 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-00d7366f-6279-474d-83f9-372df7eb27ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") pod \"glance-default-internal-api-0\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.496992 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") pod \"glance-default-external-api-0\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.505749 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 17:21:02 crc kubenswrapper[4870]: I0216 17:21:02.513343 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 17:21:05 crc kubenswrapper[4870]: I0216 17:21:05.366671 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:21:05 crc kubenswrapper[4870]: I0216 17:21:05.367310 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.281163 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-9kr25"
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.338049 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zgtq8"]
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.339634 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" podUID="6bcafa21-00bc-4d37-9294-c3f378c43012" containerName="dnsmasq-dns" containerID="cri-o://5d3761c76b23160a41ff645851e1bb97d3018801d6cc6692ea9284298ca7cf52" gracePeriod=10
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.414736 4870 scope.go:117] "RemoveContainer" containerID="c277d28bdab5e9be86875e354b38d5606ff6301e6ed8c49ab2167ebdd9ee1d43"
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.535801 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" podUID="6bcafa21-00bc-4d37-9294-c3f378c43012" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: connect: connection refused"
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.617849 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7qf8f"
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.627642 4870 scope.go:117] "RemoveContainer" containerID="de4087913188a9a79da235973c130e35bf9efb69f04d508e96635d24012fbb15"
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.686172 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-s4xns"
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.699181 4870 scope.go:117] "RemoveContainer" containerID="6d62600f6e4149101e592a6732bc688b5bb35b1f5b8b2b1f40fa3cfaa145ac59"
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.715720 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-combined-ca-bundle\") pod \"e04460b7-407f-4474-bf99-264869cf6529\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") "
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.716083 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-fernet-keys\") pod \"e04460b7-407f-4474-bf99-264869cf6529\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") "
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.716123 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-config-data\") pod \"e04460b7-407f-4474-bf99-264869cf6529\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") "
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.716212 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-scripts\") pod \"e04460b7-407f-4474-bf99-264869cf6529\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") "
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.716256 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnzgb\" (UniqueName: \"kubernetes.io/projected/e04460b7-407f-4474-bf99-264869cf6529-kube-api-access-gnzgb\") pod \"e04460b7-407f-4474-bf99-264869cf6529\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") "
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.716326 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-credential-keys\") pod \"e04460b7-407f-4474-bf99-264869cf6529\" (UID: \"e04460b7-407f-4474-bf99-264869cf6529\") "
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.721993 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e04460b7-407f-4474-bf99-264869cf6529" (UID: "e04460b7-407f-4474-bf99-264869cf6529"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.726574 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9n2tj"
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.727079 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-scripts" (OuterVolumeSpecName: "scripts") pod "e04460b7-407f-4474-bf99-264869cf6529" (UID: "e04460b7-407f-4474-bf99-264869cf6529"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.727725 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e04460b7-407f-4474-bf99-264869cf6529-kube-api-access-gnzgb" (OuterVolumeSpecName: "kube-api-access-gnzgb") pod "e04460b7-407f-4474-bf99-264869cf6529" (UID: "e04460b7-407f-4474-bf99-264869cf6529"). InnerVolumeSpecName "kube-api-access-gnzgb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.736214 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e04460b7-407f-4474-bf99-264869cf6529" (UID: "e04460b7-407f-4474-bf99-264869cf6529"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.780988 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e04460b7-407f-4474-bf99-264869cf6529" (UID: "e04460b7-407f-4474-bf99-264869cf6529"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.784859 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-config-data" (OuterVolumeSpecName: "config-data") pod "e04460b7-407f-4474-bf99-264869cf6529" (UID: "e04460b7-407f-4474-bf99-264869cf6529"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.818165 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-db-sync-config-data\") pod \"375ecf8f-1d93-40fb-85dc-c0eabcef46c3\" (UID: \"375ecf8f-1d93-40fb-85dc-c0eabcef46c3\") "
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.818252 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c6489e4-d44c-4e7d-a451-620da210060e-logs\") pod \"6c6489e4-d44c-4e7d-a451-620da210060e\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") "
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.818298 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpgz9\" (UniqueName: \"kubernetes.io/projected/6c6489e4-d44c-4e7d-a451-620da210060e-kube-api-access-fpgz9\") pod \"6c6489e4-d44c-4e7d-a451-620da210060e\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") "
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.818329 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-combined-ca-bundle\") pod \"6c6489e4-d44c-4e7d-a451-620da210060e\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") "
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.818371 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpznb\" (UniqueName: \"kubernetes.io/projected/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-kube-api-access-zpznb\") pod \"375ecf8f-1d93-40fb-85dc-c0eabcef46c3\" (UID: \"375ecf8f-1d93-40fb-85dc-c0eabcef46c3\") "
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.818481 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-combined-ca-bundle\") pod \"375ecf8f-1d93-40fb-85dc-c0eabcef46c3\" (UID: \"375ecf8f-1d93-40fb-85dc-c0eabcef46c3\") "
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.818521 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-config-data\") pod \"6c6489e4-d44c-4e7d-a451-620da210060e\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") "
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.818596 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-scripts\") pod \"6c6489e4-d44c-4e7d-a451-620da210060e\" (UID: \"6c6489e4-d44c-4e7d-a451-620da210060e\") "
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.818653 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c6489e4-d44c-4e7d-a451-620da210060e-logs" (OuterVolumeSpecName: "logs") pod "6c6489e4-d44c-4e7d-a451-620da210060e" (UID: "6c6489e4-d44c-4e7d-a451-620da210060e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.819149 4870 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.819167 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c6489e4-d44c-4e7d-a451-620da210060e-logs\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.819179 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.819191 4870 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.819203 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.819214 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e04460b7-407f-4474-bf99-264869cf6529-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.819227 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnzgb\" (UniqueName: \"kubernetes.io/projected/e04460b7-407f-4474-bf99-264869cf6529-kube-api-access-gnzgb\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.821674 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c6489e4-d44c-4e7d-a451-620da210060e-kube-api-access-fpgz9" (OuterVolumeSpecName: "kube-api-access-fpgz9") pod "6c6489e4-d44c-4e7d-a451-620da210060e" (UID: "6c6489e4-d44c-4e7d-a451-620da210060e"). InnerVolumeSpecName "kube-api-access-fpgz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.827364 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "375ecf8f-1d93-40fb-85dc-c0eabcef46c3" (UID: "375ecf8f-1d93-40fb-85dc-c0eabcef46c3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.827402 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-kube-api-access-zpznb" (OuterVolumeSpecName: "kube-api-access-zpznb") pod "375ecf8f-1d93-40fb-85dc-c0eabcef46c3" (UID: "375ecf8f-1d93-40fb-85dc-c0eabcef46c3"). InnerVolumeSpecName "kube-api-access-zpznb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.829899 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-scripts" (OuterVolumeSpecName: "scripts") pod "6c6489e4-d44c-4e7d-a451-620da210060e" (UID: "6c6489e4-d44c-4e7d-a451-620da210060e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.842791 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-config-data" (OuterVolumeSpecName: "config-data") pod "6c6489e4-d44c-4e7d-a451-620da210060e" (UID: "6c6489e4-d44c-4e7d-a451-620da210060e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.844415 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "375ecf8f-1d93-40fb-85dc-c0eabcef46c3" (UID: "375ecf8f-1d93-40fb-85dc-c0eabcef46c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.853680 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c6489e4-d44c-4e7d-a451-620da210060e" (UID: "6c6489e4-d44c-4e7d-a451-620da210060e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.891372 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8"
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.922936 4870 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.922982 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpgz9\" (UniqueName: \"kubernetes.io/projected/6c6489e4-d44c-4e7d-a451-620da210060e-kube-api-access-fpgz9\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.922994 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.923005 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpznb\" (UniqueName: \"kubernetes.io/projected/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-kube-api-access-zpznb\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.923014 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/375ecf8f-1d93-40fb-85dc-c0eabcef46c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.923022 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:07 crc kubenswrapper[4870]: I0216 17:21:07.923031 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c6489e4-d44c-4e7d-a451-620da210060e-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.026452 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-dns-svc\") pod \"6bcafa21-00bc-4d37-9294-c3f378c43012\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") "
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.026935 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnr24\" (UniqueName: \"kubernetes.io/projected/6bcafa21-00bc-4d37-9294-c3f378c43012-kube-api-access-tnr24\") pod \"6bcafa21-00bc-4d37-9294-c3f378c43012\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") "
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.026981 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-dns-swift-storage-0\") pod \"6bcafa21-00bc-4d37-9294-c3f378c43012\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") "
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.027080 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-ovsdbserver-sb\") pod \"6bcafa21-00bc-4d37-9294-c3f378c43012\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") "
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.027201 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-ovsdbserver-nb\") pod \"6bcafa21-00bc-4d37-9294-c3f378c43012\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") "
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.027388 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-config\") pod \"6bcafa21-00bc-4d37-9294-c3f378c43012\" (UID: \"6bcafa21-00bc-4d37-9294-c3f378c43012\") "
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.031441 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bcafa21-00bc-4d37-9294-c3f378c43012-kube-api-access-tnr24" (OuterVolumeSpecName: "kube-api-access-tnr24") pod "6bcafa21-00bc-4d37-9294-c3f378c43012" (UID: "6bcafa21-00bc-4d37-9294-c3f378c43012"). InnerVolumeSpecName "kube-api-access-tnr24". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.106562 4870 generic.go:334] "Generic (PLEG): container finished" podID="6bcafa21-00bc-4d37-9294-c3f378c43012" containerID="5d3761c76b23160a41ff645851e1bb97d3018801d6cc6692ea9284298ca7cf52" exitCode=0
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.106647 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" event={"ID":"6bcafa21-00bc-4d37-9294-c3f378c43012","Type":"ContainerDied","Data":"5d3761c76b23160a41ff645851e1bb97d3018801d6cc6692ea9284298ca7cf52"}
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.106680 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8" event={"ID":"6bcafa21-00bc-4d37-9294-c3f378c43012","Type":"ContainerDied","Data":"d08c720a8368340a97f4b1d18091a9aa9f1200eccbaeff851f5a751179c9e079"}
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.106685 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-zgtq8"
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.106702 4870 scope.go:117] "RemoveContainer" containerID="5d3761c76b23160a41ff645851e1bb97d3018801d6cc6692ea9284298ca7cf52"
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.111744 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38836e81-1b99-4b50-ada2-40727db1f248","Type":"ContainerStarted","Data":"d02b2381bf8f2683de03b2ccdd3ce10b27ef7cdee07bd8ee3818ba5a1749d450"}
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.113594 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7qf8f" event={"ID":"e04460b7-407f-4474-bf99-264869cf6529","Type":"ContainerDied","Data":"7368b7325511162f9b1cf53b1f692d9fd6110aefb17d192c14b28ca4d39ed2b4"}
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.113622 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7368b7325511162f9b1cf53b1f692d9fd6110aefb17d192c14b28ca4d39ed2b4"
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.113691 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7qf8f"
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.122246 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9n2tj" event={"ID":"6c6489e4-d44c-4e7d-a451-620da210060e","Type":"ContainerDied","Data":"91abbd2ce6f65a5e2935c0c4fceb6d5d54cee6e04893c05a208a6f214fdc47eb"}
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.122284 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91abbd2ce6f65a5e2935c0c4fceb6d5d54cee6e04893c05a208a6f214fdc47eb"
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.122334 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9n2tj"
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.122776 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6bcafa21-00bc-4d37-9294-c3f378c43012" (UID: "6bcafa21-00bc-4d37-9294-c3f378c43012"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.124219 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6bcafa21-00bc-4d37-9294-c3f378c43012" (UID: "6bcafa21-00bc-4d37-9294-c3f378c43012"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.126202 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s4xns" event={"ID":"375ecf8f-1d93-40fb-85dc-c0eabcef46c3","Type":"ContainerDied","Data":"a4ebaf774ff5fcd91c7e4267d8c6555edb91d24ca103cad17035ab7b89bf58e8"}
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.126235 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4ebaf774ff5fcd91c7e4267d8c6555edb91d24ca103cad17035ab7b89bf58e8"
Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.126289 4870 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-db-sync-s4xns" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.131990 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.132022 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnr24\" (UniqueName: \"kubernetes.io/projected/6bcafa21-00bc-4d37-9294-c3f378c43012-kube-api-access-tnr24\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.132036 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.132698 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6bcafa21-00bc-4d37-9294-c3f378c43012" (UID: "6bcafa21-00bc-4d37-9294-c3f378c43012"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.133509 4870 scope.go:117] "RemoveContainer" containerID="e567062079a537ea46752bd7a9d3395d5a041cb7eb9d23fc0e08f21d6e29934f" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.149619 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6bcafa21-00bc-4d37-9294-c3f378c43012" (UID: "6bcafa21-00bc-4d37-9294-c3f378c43012"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.154008 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-config" (OuterVolumeSpecName: "config") pod "6bcafa21-00bc-4d37-9294-c3f378c43012" (UID: "6bcafa21-00bc-4d37-9294-c3f378c43012"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.174185 4870 scope.go:117] "RemoveContainer" containerID="5d3761c76b23160a41ff645851e1bb97d3018801d6cc6692ea9284298ca7cf52" Feb 16 17:21:08 crc kubenswrapper[4870]: E0216 17:21:08.174995 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d3761c76b23160a41ff645851e1bb97d3018801d6cc6692ea9284298ca7cf52\": container with ID starting with 5d3761c76b23160a41ff645851e1bb97d3018801d6cc6692ea9284298ca7cf52 not found: ID does not exist" containerID="5d3761c76b23160a41ff645851e1bb97d3018801d6cc6692ea9284298ca7cf52" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.175039 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d3761c76b23160a41ff645851e1bb97d3018801d6cc6692ea9284298ca7cf52"} err="failed to get container status \"5d3761c76b23160a41ff645851e1bb97d3018801d6cc6692ea9284298ca7cf52\": rpc error: code = NotFound desc = could not find container \"5d3761c76b23160a41ff645851e1bb97d3018801d6cc6692ea9284298ca7cf52\": container with ID starting with 5d3761c76b23160a41ff645851e1bb97d3018801d6cc6692ea9284298ca7cf52 not found: ID does not exist" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.175070 4870 scope.go:117] "RemoveContainer" containerID="e567062079a537ea46752bd7a9d3395d5a041cb7eb9d23fc0e08f21d6e29934f" Feb 16 17:21:08 crc kubenswrapper[4870]: E0216 17:21:08.175434 4870 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"e567062079a537ea46752bd7a9d3395d5a041cb7eb9d23fc0e08f21d6e29934f\": container with ID starting with e567062079a537ea46752bd7a9d3395d5a041cb7eb9d23fc0e08f21d6e29934f not found: ID does not exist" containerID="e567062079a537ea46752bd7a9d3395d5a041cb7eb9d23fc0e08f21d6e29934f" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.175478 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e567062079a537ea46752bd7a9d3395d5a041cb7eb9d23fc0e08f21d6e29934f"} err="failed to get container status \"e567062079a537ea46752bd7a9d3395d5a041cb7eb9d23fc0e08f21d6e29934f\": rpc error: code = NotFound desc = could not find container \"e567062079a537ea46752bd7a9d3395d5a041cb7eb9d23fc0e08f21d6e29934f\": container with ID starting with e567062079a537ea46752bd7a9d3395d5a041cb7eb9d23fc0e08f21d6e29934f not found: ID does not exist" Feb 16 17:21:08 crc kubenswrapper[4870]: W0216 17:21:08.212980 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dccdc97_f78d_4a2e_9e18_4956fe9fc535.slice/crio-b3dd6b1e9d78d60643440240c934d9275fd87985c0188af0263b3580c0061a02 WatchSource:0}: Error finding container b3dd6b1e9d78d60643440240c934d9275fd87985c0188af0263b3580c0061a02: Status 404 returned error can't find the container with id b3dd6b1e9d78d60643440240c934d9275fd87985c0188af0263b3580c0061a02 Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.214778 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.235086 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.235151 4870 
reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.235167 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bcafa21-00bc-4d37-9294-c3f378c43012-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.432386 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zgtq8"] Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.440461 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zgtq8"] Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.817894 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-568fd566f-ltx6b"] Feb 16 17:21:08 crc kubenswrapper[4870]: E0216 17:21:08.818693 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e04460b7-407f-4474-bf99-264869cf6529" containerName="keystone-bootstrap" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.818718 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e04460b7-407f-4474-bf99-264869cf6529" containerName="keystone-bootstrap" Feb 16 17:21:08 crc kubenswrapper[4870]: E0216 17:21:08.818743 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bcafa21-00bc-4d37-9294-c3f378c43012" containerName="init" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.818752 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bcafa21-00bc-4d37-9294-c3f378c43012" containerName="init" Feb 16 17:21:08 crc kubenswrapper[4870]: E0216 17:21:08.818796 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bcafa21-00bc-4d37-9294-c3f378c43012" containerName="dnsmasq-dns" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.818805 4870 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6bcafa21-00bc-4d37-9294-c3f378c43012" containerName="dnsmasq-dns" Feb 16 17:21:08 crc kubenswrapper[4870]: E0216 17:21:08.818821 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c6489e4-d44c-4e7d-a451-620da210060e" containerName="placement-db-sync" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.818829 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c6489e4-d44c-4e7d-a451-620da210060e" containerName="placement-db-sync" Feb 16 17:21:08 crc kubenswrapper[4870]: E0216 17:21:08.818846 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="375ecf8f-1d93-40fb-85dc-c0eabcef46c3" containerName="barbican-db-sync" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.818854 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="375ecf8f-1d93-40fb-85dc-c0eabcef46c3" containerName="barbican-db-sync" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.819096 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c6489e4-d44c-4e7d-a451-620da210060e" containerName="placement-db-sync" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.819124 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e04460b7-407f-4474-bf99-264869cf6529" containerName="keystone-bootstrap" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.819137 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="375ecf8f-1d93-40fb-85dc-c0eabcef46c3" containerName="barbican-db-sync" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.819149 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bcafa21-00bc-4d37-9294-c3f378c43012" containerName="dnsmasq-dns" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.820060 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.823775 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.824327 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.824541 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.824794 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.825033 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.825211 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pm4j6" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.836343 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-568fd566f-ltx6b"] Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.954422 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-89df85dc-88tt5"] Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.956910 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.968584 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dghs\" (UniqueName: \"kubernetes.io/projected/25726b72-a54a-4482-8440-671195187a49-kube-api-access-6dghs\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.968639 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-credential-keys\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.968700 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-scripts\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.968735 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-public-tls-certs\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.968757 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-combined-ca-bundle\") pod \"keystone-568fd566f-ltx6b\" (UID: 
\"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.968774 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-fernet-keys\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.968789 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-internal-tls-certs\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:08 crc kubenswrapper[4870]: I0216 17:21:08.968805 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-config-data\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:08.995834 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:08.996083 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-h5smh" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:08.996208 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.000564 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-8754cf966-v85sw"] Feb 16 17:21:09 crc 
kubenswrapper[4870]: I0216 17:21:09.002435 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.009426 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.045974 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-89df85dc-88tt5"] Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.073536 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dghs\" (UniqueName: \"kubernetes.io/projected/25726b72-a54a-4482-8440-671195187a49-kube-api-access-6dghs\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.073689 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-credential-keys\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.073930 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-scripts\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.074083 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-public-tls-certs\") pod \"keystone-568fd566f-ltx6b\" (UID: 
\"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.074371 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-combined-ca-bundle\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.074404 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhkl2\" (UniqueName: \"kubernetes.io/projected/887feada-bbae-4e0a-bb20-a1e29b65cef9-kube-api-access-nhkl2\") pod \"barbican-worker-89df85dc-88tt5\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.074424 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-fernet-keys\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.074620 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-internal-tls-certs\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.074641 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-config-data\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " 
pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.074933 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-combined-ca-bundle\") pod \"barbican-worker-89df85dc-88tt5\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.075143 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/887feada-bbae-4e0a-bb20-a1e29b65cef9-logs\") pod \"barbican-worker-89df85dc-88tt5\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.075172 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-config-data\") pod \"barbican-worker-89df85dc-88tt5\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.075441 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-config-data-custom\") pod \"barbican-worker-89df85dc-88tt5\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.086577 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8754cf966-v85sw"] Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.096586 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-internal-tls-certs\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.104138 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-credential-keys\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.105808 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-config-data\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.106775 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-fernet-keys\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.107498 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-public-tls-certs\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.109345 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-combined-ca-bundle\") pod 
\"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.115458 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25726b72-a54a-4482-8440-671195187a49-scripts\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.126431 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-798bd5bd64-st2b8"] Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.141768 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dghs\" (UniqueName: \"kubernetes.io/projected/25726b72-a54a-4482-8440-671195187a49-kube-api-access-6dghs\") pod \"keystone-568fd566f-ltx6b\" (UID: \"25726b72-a54a-4482-8440-671195187a49\") " pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.146825 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.177265 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.177529 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.177677 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.177846 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.178050 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4vpb8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.179158 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-config-data\") pod \"barbican-keystone-listener-8754cf966-v85sw\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.179210 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-logs\") pod \"barbican-keystone-listener-8754cf966-v85sw\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.179378 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-combined-ca-bundle\") pod \"barbican-keystone-listener-8754cf966-v85sw\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.179428 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhkl2\" (UniqueName: \"kubernetes.io/projected/887feada-bbae-4e0a-bb20-a1e29b65cef9-kube-api-access-nhkl2\") pod \"barbican-worker-89df85dc-88tt5\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.182078 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf52h\" (UniqueName: \"kubernetes.io/projected/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-kube-api-access-zf52h\") pod \"barbican-keystone-listener-8754cf966-v85sw\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.182209 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-config-data-custom\") pod \"barbican-keystone-listener-8754cf966-v85sw\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.182313 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-combined-ca-bundle\") pod \"barbican-worker-89df85dc-88tt5\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 
17:21:09.182387 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/887feada-bbae-4e0a-bb20-a1e29b65cef9-logs\") pod \"barbican-worker-89df85dc-88tt5\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.182428 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-config-data\") pod \"barbican-worker-89df85dc-88tt5\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.182468 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-config-data-custom\") pod \"barbican-worker-89df85dc-88tt5\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.183564 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/887feada-bbae-4e0a-bb20-a1e29b65cef9-logs\") pod \"barbican-worker-89df85dc-88tt5\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.193889 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-config-data-custom\") pod \"barbican-worker-89df85dc-88tt5\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.195129 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-combined-ca-bundle\") pod \"barbican-worker-89df85dc-88tt5\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.206998 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-798bd5bd64-st2b8"] Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.210619 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-config-data\") pod \"barbican-worker-89df85dc-88tt5\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.231065 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhkl2\" (UniqueName: \"kubernetes.io/projected/887feada-bbae-4e0a-bb20-a1e29b65cef9-kube-api-access-nhkl2\") pod \"barbican-worker-89df85dc-88tt5\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.231500 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.231851 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5dccdc97-f78d-4a2e-9e18-4956fe9fc535","Type":"ContainerStarted","Data":"174b76b80058494b81583b695257ebdd50654383978093ce2c861c57bc68f5cb"} Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.231887 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5dccdc97-f78d-4a2e-9e18-4956fe9fc535","Type":"ContainerStarted","Data":"b3dd6b1e9d78d60643440240c934d9275fd87985c0188af0263b3580c0061a02"} Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.289604 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-config-data\") pod \"barbican-keystone-listener-8754cf966-v85sw\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.289656 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-logs\") pod \"barbican-keystone-listener-8754cf966-v85sw\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.289697 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-internal-tls-certs\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.289727 4870 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-combined-ca-bundle\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.289786 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-combined-ca-bundle\") pod \"barbican-keystone-listener-8754cf966-v85sw\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.289984 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-public-tls-certs\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.290019 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf52h\" (UniqueName: \"kubernetes.io/projected/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-kube-api-access-zf52h\") pod \"barbican-keystone-listener-8754cf966-v85sw\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.290057 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-config-data-custom\") pod \"barbican-keystone-listener-8754cf966-v85sw\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " 
pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.290094 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bd7dab-7469-4449-b1dd-dc57aa17c27c-logs\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.290141 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt4ph\" (UniqueName: \"kubernetes.io/projected/96bd7dab-7469-4449-b1dd-dc57aa17c27c-kube-api-access-qt4ph\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.290169 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-scripts\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.290201 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-config-data\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.292543 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-logs\") pod \"barbican-keystone-listener-8754cf966-v85sw\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " 
pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.305540 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-combined-ca-bundle\") pod \"barbican-keystone-listener-8754cf966-v85sw\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.311613 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-fvtqg"] Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.315160 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.316566 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-config-data-custom\") pod \"barbican-keystone-listener-8754cf966-v85sw\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.319001 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-config-data\") pod \"barbican-keystone-listener-8754cf966-v85sw\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.344146 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-fvtqg"] Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.362374 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.374087 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf52h\" (UniqueName: \"kubernetes.io/projected/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-kube-api-access-zf52h\") pod \"barbican-keystone-listener-8754cf966-v85sw\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.378236 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.392064 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.397840 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-public-tls-certs\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.397917 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bd7dab-7469-4449-b1dd-dc57aa17c27c-logs\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.397976 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt4ph\" (UniqueName: \"kubernetes.io/projected/96bd7dab-7469-4449-b1dd-dc57aa17c27c-kube-api-access-qt4ph\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " 
pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.398011 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-scripts\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.398035 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-config-data\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.398169 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-combined-ca-bundle\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.398237 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-internal-tls-certs\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.412150 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-internal-tls-certs\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 
17:21:09.415755 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bd7dab-7469-4449-b1dd-dc57aa17c27c-logs\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.420566 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-combined-ca-bundle\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.433705 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-scripts\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.433780 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-756987b5cd-6brc9"] Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.435481 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.436469 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-public-tls-certs\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.436751 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-config-data\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.452694 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.466216 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-756987b5cd-6brc9"] Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.473255 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt4ph\" (UniqueName: \"kubernetes.io/projected/96bd7dab-7469-4449-b1dd-dc57aa17c27c-kube-api-access-qt4ph\") pod \"placement-798bd5bd64-st2b8\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.513370 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79mgs\" (UniqueName: \"kubernetes.io/projected/ea7726c3-83d9-4ab1-99a5-7242373754fd-kube-api-access-79mgs\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc 
kubenswrapper[4870]: I0216 17:21:09.513598 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-dns-svc\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.513892 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.513928 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-config\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.513969 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.513997 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" 
Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.534047 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.578525 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-d9984d7fd-5x6fd"] Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.580621 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.598095 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-d9984d7fd-5x6fd"] Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.615957 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-config-data\") pod \"barbican-api-756987b5cd-6brc9\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.616231 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.616254 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-config\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.616276 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.616297 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.616383 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79mgs\" (UniqueName: \"kubernetes.io/projected/ea7726c3-83d9-4ab1-99a5-7242373754fd-kube-api-access-79mgs\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.616419 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-combined-ca-bundle\") pod \"barbican-api-756987b5cd-6brc9\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.616441 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5faea269-54ff-4f1f-933c-e16bf517fa14-logs\") pod \"barbican-api-756987b5cd-6brc9\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.616466 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws6xd\" (UniqueName: \"kubernetes.io/projected/5faea269-54ff-4f1f-933c-e16bf517fa14-kube-api-access-ws6xd\") pod \"barbican-api-756987b5cd-6brc9\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.616490 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-config-data-custom\") pod \"barbican-api-756987b5cd-6brc9\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.616706 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-dns-svc\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.618112 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-dns-svc\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.618408 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.619659 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-config\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.621557 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.624177 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.659527 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79mgs\" (UniqueName: \"kubernetes.io/projected/ea7726c3-83d9-4ab1-99a5-7242373754fd-kube-api-access-79mgs\") pod \"dnsmasq-dns-85ff748b95-fvtqg\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.678235 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5c84768f67-lv86b"] Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.711205 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5c84768f67-lv86b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.733712 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01fcbdc-1303-44a6-95ff-cffdad0e2fa6-config-data\") pod \"barbican-keystone-listener-d9984d7fd-5x6fd\" (UID: \"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6\") " pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.733813 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d01fcbdc-1303-44a6-95ff-cffdad0e2fa6-logs\") pod \"barbican-keystone-listener-d9984d7fd-5x6fd\" (UID: \"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6\") " pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.733847 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d01fcbdc-1303-44a6-95ff-cffdad0e2fa6-config-data-custom\") pod \"barbican-keystone-listener-d9984d7fd-5x6fd\" (UID: \"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6\") " pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.733930 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-combined-ca-bundle\") pod \"barbican-api-756987b5cd-6brc9\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.733989 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5faea269-54ff-4f1f-933c-e16bf517fa14-logs\") pod 
\"barbican-api-756987b5cd-6brc9\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.734019 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws6xd\" (UniqueName: \"kubernetes.io/projected/5faea269-54ff-4f1f-933c-e16bf517fa14-kube-api-access-ws6xd\") pod \"barbican-api-756987b5cd-6brc9\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.734044 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-config-data-custom\") pod \"barbican-api-756987b5cd-6brc9\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.734065 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btbqc\" (UniqueName: \"kubernetes.io/projected/d01fcbdc-1303-44a6-95ff-cffdad0e2fa6-kube-api-access-btbqc\") pod \"barbican-keystone-listener-d9984d7fd-5x6fd\" (UID: \"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6\") " pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.734117 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01fcbdc-1303-44a6-95ff-cffdad0e2fa6-combined-ca-bundle\") pod \"barbican-keystone-listener-d9984d7fd-5x6fd\" (UID: \"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6\") " pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.734153 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-config-data\") pod \"barbican-api-756987b5cd-6brc9\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.737029 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5faea269-54ff-4f1f-933c-e16bf517fa14-logs\") pod \"barbican-api-756987b5cd-6brc9\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.752071 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-config-data\") pod \"barbican-api-756987b5cd-6brc9\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.761207 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws6xd\" (UniqueName: \"kubernetes.io/projected/5faea269-54ff-4f1f-933c-e16bf517fa14-kube-api-access-ws6xd\") pod \"barbican-api-756987b5cd-6brc9\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.770589 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-config-data-custom\") pod \"barbican-api-756987b5cd-6brc9\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.778696 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-combined-ca-bundle\") pod 
\"barbican-api-756987b5cd-6brc9\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.790367 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c84768f67-lv86b"] Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.840425 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d01fcbdc-1303-44a6-95ff-cffdad0e2fa6-logs\") pod \"barbican-keystone-listener-d9984d7fd-5x6fd\" (UID: \"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6\") " pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.840474 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d01fcbdc-1303-44a6-95ff-cffdad0e2fa6-config-data-custom\") pod \"barbican-keystone-listener-d9984d7fd-5x6fd\" (UID: \"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6\") " pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.840523 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8dvz\" (UniqueName: \"kubernetes.io/projected/55dc3430-223f-4944-9678-6a93b6d69499-kube-api-access-x8dvz\") pod \"barbican-worker-5c84768f67-lv86b\" (UID: \"55dc3430-223f-4944-9678-6a93b6d69499\") " pod="openstack/barbican-worker-5c84768f67-lv86b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.840584 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55dc3430-223f-4944-9678-6a93b6d69499-config-data-custom\") pod \"barbican-worker-5c84768f67-lv86b\" (UID: \"55dc3430-223f-4944-9678-6a93b6d69499\") " pod="openstack/barbican-worker-5c84768f67-lv86b" Feb 16 17:21:09 crc 
kubenswrapper[4870]: I0216 17:21:09.840607 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dc3430-223f-4944-9678-6a93b6d69499-combined-ca-bundle\") pod \"barbican-worker-5c84768f67-lv86b\" (UID: \"55dc3430-223f-4944-9678-6a93b6d69499\") " pod="openstack/barbican-worker-5c84768f67-lv86b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.840626 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btbqc\" (UniqueName: \"kubernetes.io/projected/d01fcbdc-1303-44a6-95ff-cffdad0e2fa6-kube-api-access-btbqc\") pod \"barbican-keystone-listener-d9984d7fd-5x6fd\" (UID: \"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6\") " pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.840670 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55dc3430-223f-4944-9678-6a93b6d69499-logs\") pod \"barbican-worker-5c84768f67-lv86b\" (UID: \"55dc3430-223f-4944-9678-6a93b6d69499\") " pod="openstack/barbican-worker-5c84768f67-lv86b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.840694 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01fcbdc-1303-44a6-95ff-cffdad0e2fa6-combined-ca-bundle\") pod \"barbican-keystone-listener-d9984d7fd-5x6fd\" (UID: \"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6\") " pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.840741 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01fcbdc-1303-44a6-95ff-cffdad0e2fa6-config-data\") pod \"barbican-keystone-listener-d9984d7fd-5x6fd\" (UID: \"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6\") " 
pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.840765 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55dc3430-223f-4944-9678-6a93b6d69499-config-data\") pod \"barbican-worker-5c84768f67-lv86b\" (UID: \"55dc3430-223f-4944-9678-6a93b6d69499\") " pod="openstack/barbican-worker-5c84768f67-lv86b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.881405 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d01fcbdc-1303-44a6-95ff-cffdad0e2fa6-logs\") pod \"barbican-keystone-listener-d9984d7fd-5x6fd\" (UID: \"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6\") " pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.888020 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-756f867d68-hgndg"] Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.896067 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d01fcbdc-1303-44a6-95ff-cffdad0e2fa6-config-data\") pod \"barbican-keystone-listener-d9984d7fd-5x6fd\" (UID: \"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6\") " pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.904499 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.906520 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d01fcbdc-1303-44a6-95ff-cffdad0e2fa6-config-data-custom\") pod \"barbican-keystone-listener-d9984d7fd-5x6fd\" (UID: \"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6\") " pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.910107 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btbqc\" (UniqueName: \"kubernetes.io/projected/d01fcbdc-1303-44a6-95ff-cffdad0e2fa6-kube-api-access-btbqc\") pod \"barbican-keystone-listener-d9984d7fd-5x6fd\" (UID: \"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6\") " pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.911985 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6dbb74864-cqlt9"] Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.913975 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.915509 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01fcbdc-1303-44a6-95ff-cffdad0e2fa6-combined-ca-bundle\") pod \"barbican-keystone-listener-d9984d7fd-5x6fd\" (UID: \"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6\") " pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.928544 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dbb74864-cqlt9"] Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.952873 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.954320 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpbtm\" (UniqueName: \"kubernetes.io/projected/1ebe7703-7d1a-47d0-b3b2-8965365beb56-kube-api-access-gpbtm\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.954684 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ebe7703-7d1a-47d0-b3b2-8965365beb56-logs\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.954789 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ebe7703-7d1a-47d0-b3b2-8965365beb56-combined-ca-bundle\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.954841 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ebe7703-7d1a-47d0-b3b2-8965365beb56-public-tls-certs\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.954896 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8dvz\" (UniqueName: \"kubernetes.io/projected/55dc3430-223f-4944-9678-6a93b6d69499-kube-api-access-x8dvz\") pod \"barbican-worker-5c84768f67-lv86b\" 
(UID: \"55dc3430-223f-4944-9678-6a93b6d69499\") " pod="openstack/barbican-worker-5c84768f67-lv86b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.955019 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ebe7703-7d1a-47d0-b3b2-8965365beb56-scripts\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.955073 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ebe7703-7d1a-47d0-b3b2-8965365beb56-internal-tls-certs\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.955130 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55dc3430-223f-4944-9678-6a93b6d69499-config-data-custom\") pod \"barbican-worker-5c84768f67-lv86b\" (UID: \"55dc3430-223f-4944-9678-6a93b6d69499\") " pod="openstack/barbican-worker-5c84768f67-lv86b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.955155 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ebe7703-7d1a-47d0-b3b2-8965365beb56-config-data\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.955181 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dc3430-223f-4944-9678-6a93b6d69499-combined-ca-bundle\") pod \"barbican-worker-5c84768f67-lv86b\" 
(UID: \"55dc3430-223f-4944-9678-6a93b6d69499\") " pod="openstack/barbican-worker-5c84768f67-lv86b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.955307 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55dc3430-223f-4944-9678-6a93b6d69499-logs\") pod \"barbican-worker-5c84768f67-lv86b\" (UID: \"55dc3430-223f-4944-9678-6a93b6d69499\") " pod="openstack/barbican-worker-5c84768f67-lv86b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.955420 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55dc3430-223f-4944-9678-6a93b6d69499-config-data\") pod \"barbican-worker-5c84768f67-lv86b\" (UID: \"55dc3430-223f-4944-9678-6a93b6d69499\") " pod="openstack/barbican-worker-5c84768f67-lv86b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.959372 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55dc3430-223f-4944-9678-6a93b6d69499-logs\") pod \"barbican-worker-5c84768f67-lv86b\" (UID: \"55dc3430-223f-4944-9678-6a93b6d69499\") " pod="openstack/barbican-worker-5c84768f67-lv86b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.961725 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.962179 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55dc3430-223f-4944-9678-6a93b6d69499-config-data-custom\") pod \"barbican-worker-5c84768f67-lv86b\" (UID: \"55dc3430-223f-4944-9678-6a93b6d69499\") " pod="openstack/barbican-worker-5c84768f67-lv86b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.962692 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dc3430-223f-4944-9678-6a93b6d69499-combined-ca-bundle\") pod \"barbican-worker-5c84768f67-lv86b\" (UID: \"55dc3430-223f-4944-9678-6a93b6d69499\") " pod="openstack/barbican-worker-5c84768f67-lv86b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.979315 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8dvz\" (UniqueName: \"kubernetes.io/projected/55dc3430-223f-4944-9678-6a93b6d69499-kube-api-access-x8dvz\") pod \"barbican-worker-5c84768f67-lv86b\" (UID: \"55dc3430-223f-4944-9678-6a93b6d69499\") " pod="openstack/barbican-worker-5c84768f67-lv86b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.980127 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55dc3430-223f-4944-9678-6a93b6d69499-config-data\") pod \"barbican-worker-5c84768f67-lv86b\" (UID: \"55dc3430-223f-4944-9678-6a93b6d69499\") " pod="openstack/barbican-worker-5c84768f67-lv86b" Feb 16 17:21:09 crc kubenswrapper[4870]: I0216 17:21:09.986545 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-756f867d68-hgndg"] Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.057080 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1ebe7703-7d1a-47d0-b3b2-8965365beb56-logs\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.057122 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ebe7703-7d1a-47d0-b3b2-8965365beb56-combined-ca-bundle\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.057140 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ebe7703-7d1a-47d0-b3b2-8965365beb56-public-tls-certs\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.057161 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-config-data-custom\") pod \"barbican-api-6dbb74864-cqlt9\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.057188 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgcsc\" (UniqueName: \"kubernetes.io/projected/291e56ef-3b45-4a21-875c-f90daaf45e0b-kube-api-access-cgcsc\") pod \"barbican-api-6dbb74864-cqlt9\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.057239 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1ebe7703-7d1a-47d0-b3b2-8965365beb56-scripts\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.057261 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ebe7703-7d1a-47d0-b3b2-8965365beb56-internal-tls-certs\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.057290 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ebe7703-7d1a-47d0-b3b2-8965365beb56-config-data\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.057338 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-config-data\") pod \"barbican-api-6dbb74864-cqlt9\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.057377 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/291e56ef-3b45-4a21-875c-f90daaf45e0b-logs\") pod \"barbican-api-6dbb74864-cqlt9\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.057415 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-combined-ca-bundle\") pod \"barbican-api-6dbb74864-cqlt9\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.057466 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpbtm\" (UniqueName: \"kubernetes.io/projected/1ebe7703-7d1a-47d0-b3b2-8965365beb56-kube-api-access-gpbtm\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.058224 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ebe7703-7d1a-47d0-b3b2-8965365beb56-logs\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.063198 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ebe7703-7d1a-47d0-b3b2-8965365beb56-internal-tls-certs\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.063514 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ebe7703-7d1a-47d0-b3b2-8965365beb56-scripts\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.067822 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ebe7703-7d1a-47d0-b3b2-8965365beb56-config-data\") pod \"placement-756f867d68-hgndg\" (UID: 
\"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.074665 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ebe7703-7d1a-47d0-b3b2-8965365beb56-public-tls-certs\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.075809 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ebe7703-7d1a-47d0-b3b2-8965365beb56-combined-ca-bundle\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.094985 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpbtm\" (UniqueName: \"kubernetes.io/projected/1ebe7703-7d1a-47d0-b3b2-8965365beb56-kube-api-access-gpbtm\") pod \"placement-756f867d68-hgndg\" (UID: \"1ebe7703-7d1a-47d0-b3b2-8965365beb56\") " pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.115045 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.160238 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-config-data\") pod \"barbican-api-6dbb74864-cqlt9\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.160309 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/291e56ef-3b45-4a21-875c-f90daaf45e0b-logs\") pod \"barbican-api-6dbb74864-cqlt9\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.160331 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-combined-ca-bundle\") pod \"barbican-api-6dbb74864-cqlt9\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.160550 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-config-data-custom\") pod \"barbican-api-6dbb74864-cqlt9\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.160617 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgcsc\" (UniqueName: \"kubernetes.io/projected/291e56ef-3b45-4a21-875c-f90daaf45e0b-kube-api-access-cgcsc\") pod \"barbican-api-6dbb74864-cqlt9\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " 
pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.162020 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/291e56ef-3b45-4a21-875c-f90daaf45e0b-logs\") pod \"barbican-api-6dbb74864-cqlt9\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.165018 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-combined-ca-bundle\") pod \"barbican-api-6dbb74864-cqlt9\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.168034 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-config-data-custom\") pod \"barbican-api-6dbb74864-cqlt9\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.173817 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-config-data\") pod \"barbican-api-6dbb74864-cqlt9\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.187470 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgcsc\" (UniqueName: \"kubernetes.io/projected/291e56ef-3b45-4a21-875c-f90daaf45e0b-kube-api-access-cgcsc\") pod \"barbican-api-6dbb74864-cqlt9\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 
17:21:10.208487 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5c84768f67-lv86b" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.238481 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.255040 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bcafa21-00bc-4d37-9294-c3f378c43012" path="/var/lib/kubelet/pods/6bcafa21-00bc-4d37-9294-c3f378c43012/volumes" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.261088 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.270080 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00a8e7ab-716d-408e-a531-c49194dca35c","Type":"ContainerStarted","Data":"4cc30dd8003bd348bd90b11cd0f67de61cfe69a43b716226e149cbce32494d8b"} Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.313109 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-568fd566f-ltx6b"] Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.541557 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8754cf966-v85sw"] Feb 16 17:21:10 crc kubenswrapper[4870]: W0216 17:21:10.547479 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05e5e81a_aae4_486a_9804_d9b4b1cd74ee.slice/crio-a420b445b449ba867e7dcc6e634410d90180925f668a645c8deedb5c7feda148 WatchSource:0}: Error finding container a420b445b449ba867e7dcc6e634410d90180925f668a645c8deedb5c7feda148: Status 404 returned error can't find the container with id a420b445b449ba867e7dcc6e634410d90180925f668a645c8deedb5c7feda148 Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 
17:21:10.821485 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-89df85dc-88tt5"] Feb 16 17:21:10 crc kubenswrapper[4870]: I0216 17:21:10.835914 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-798bd5bd64-st2b8"] Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.051619 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-fvtqg"] Feb 16 17:21:11 crc kubenswrapper[4870]: W0216 17:21:11.060322 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55dc3430_223f_4944_9678_6a93b6d69499.slice/crio-ff0edc43a7f8d1a6bc95317239095f9e79a95a8d73590705e54d8b344674512d WatchSource:0}: Error finding container ff0edc43a7f8d1a6bc95317239095f9e79a95a8d73590705e54d8b344674512d: Status 404 returned error can't find the container with id ff0edc43a7f8d1a6bc95317239095f9e79a95a8d73590705e54d8b344674512d Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.079800 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c84768f67-lv86b"] Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.218925 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-756987b5cd-6brc9"] Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.252025 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-756f867d68-hgndg"] Feb 16 17:21:11 crc kubenswrapper[4870]: W0216 17:21:11.326701 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ebe7703_7d1a_47d0_b3b2_8965365beb56.slice/crio-842569a94f4a05132961094dac88d918d021039b4ddc44836fe58ca29a02c69e WatchSource:0}: Error finding container 842569a94f4a05132961094dac88d918d021039b4ddc44836fe58ca29a02c69e: Status 404 returned error can't find the container with id 
842569a94f4a05132961094dac88d918d021039b4ddc44836fe58ca29a02c69e Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.336105 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8754cf966-v85sw" event={"ID":"05e5e81a-aae4-486a-9804-d9b4b1cd74ee","Type":"ContainerStarted","Data":"a420b445b449ba867e7dcc6e634410d90180925f668a645c8deedb5c7feda148"} Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.342129 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" event={"ID":"ea7726c3-83d9-4ab1-99a5-7242373754fd","Type":"ContainerStarted","Data":"4dc6e16cba37c5d658630d9dc1180ef52c5ed94ca1e216afa58fae8ab7bef214"} Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.352272 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-798bd5bd64-st2b8" event={"ID":"96bd7dab-7469-4449-b1dd-dc57aa17c27c","Type":"ContainerStarted","Data":"1de2d4c753b27663799ff9e9faba61e66339b178a4eab40c3d98886fb7281f31"} Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.355166 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00a8e7ab-716d-408e-a531-c49194dca35c","Type":"ContainerStarted","Data":"240a7f8cc7a6f44e4eed6515d3df1dd9504c59fa8f55469dcb1496592e3a477d"} Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.357783 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-89df85dc-88tt5" event={"ID":"887feada-bbae-4e0a-bb20-a1e29b65cef9","Type":"ContainerStarted","Data":"635a6fef2a1ddef0d349aaa375bfd9fb8ca1678252abd859a22b96eac903c2f2"} Feb 16 17:21:11 crc kubenswrapper[4870]: E0216 17:21:11.358897 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in 
quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:21:11 crc kubenswrapper[4870]: E0216 17:21:11.359064 4870 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:21:11 crc kubenswrapper[4870]: E0216 17:21:11.359230 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7b2p7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6hkdm_openstack(34a86750-1fff-4add-8462-7ab805ec7f89): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:21:11 crc kubenswrapper[4870]: E0216 17:21:11.360394 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.360661 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-756987b5cd-6brc9" event={"ID":"5faea269-54ff-4f1f-933c-e16bf517fa14","Type":"ContainerStarted","Data":"07c7c1fe7509203629102e7346e32ac3306d113b2e6f98cc1c173ab11c816785"} Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.361801 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c84768f67-lv86b" event={"ID":"55dc3430-223f-4944-9678-6a93b6d69499","Type":"ContainerStarted","Data":"ff0edc43a7f8d1a6bc95317239095f9e79a95a8d73590705e54d8b344674512d"} Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.363250 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4mwgd" event={"ID":"9719dd82-cec9-4a56-ae93-29ccca75a3ef","Type":"ContainerStarted","Data":"d99881c559e2858a8ad39267eb69f5c4df47aaf96e0282d2788304dd218584e2"} Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.370611 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5dccdc97-f78d-4a2e-9e18-4956fe9fc535","Type":"ContainerStarted","Data":"d4a81ce2df993166d092df61d9ef89ce07467e6e0f69904c8303f8d50d8733c2"} Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.372765 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-568fd566f-ltx6b" event={"ID":"25726b72-a54a-4482-8440-671195187a49","Type":"ContainerStarted","Data":"55ab2fdc19c6bc282d82955085fa9e75184b8255b5b0fd81c55544975106762c"} Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.372805 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-568fd566f-ltx6b" 
event={"ID":"25726b72-a54a-4482-8440-671195187a49","Type":"ContainerStarted","Data":"7dea9cc66dce625b8b1963a2410717889a6bbfdd11be72376f3ba6540f184ed7"} Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.373760 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.384998 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-4mwgd" podStartSLOduration=4.328104497 podStartE2EDuration="45.384973241s" podCreationTimestamp="2026-02-16 17:20:26 +0000 UTC" firstStartedPulling="2026-02-16 17:20:27.896354048 +0000 UTC m=+1232.379818422" lastFinishedPulling="2026-02-16 17:21:08.953222782 +0000 UTC m=+1273.436687166" observedRunningTime="2026-02-16 17:21:11.378691551 +0000 UTC m=+1275.862155935" watchObservedRunningTime="2026-02-16 17:21:11.384973241 +0000 UTC m=+1275.868437635" Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.444341 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.444312133 podStartE2EDuration="9.444312133s" podCreationTimestamp="2026-02-16 17:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:11.417732485 +0000 UTC m=+1275.901196869" watchObservedRunningTime="2026-02-16 17:21:11.444312133 +0000 UTC m=+1275.927776517" Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.473026 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dbb74864-cqlt9"] Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.504317 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-568fd566f-ltx6b" podStartSLOduration=3.504290734 podStartE2EDuration="3.504290734s" podCreationTimestamp="2026-02-16 17:21:08 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:11.442211913 +0000 UTC m=+1275.925676297" watchObservedRunningTime="2026-02-16 17:21:11.504290734 +0000 UTC m=+1275.987755118" Feb 16 17:21:11 crc kubenswrapper[4870]: I0216 17:21:11.552843 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-d9984d7fd-5x6fd"] Feb 16 17:21:11 crc kubenswrapper[4870]: W0216 17:21:11.618760 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd01fcbdc_1303_44a6_95ff_cffdad0e2fa6.slice/crio-185a711579143bbc459b2b6a288dd96b457e9d50f137ce0947c071177885fa52 WatchSource:0}: Error finding container 185a711579143bbc459b2b6a288dd96b457e9d50f137ce0947c071177885fa52: Status 404 returned error can't find the container with id 185a711579143bbc459b2b6a288dd96b457e9d50f137ce0947c071177885fa52 Feb 16 17:21:12 crc kubenswrapper[4870]: I0216 17:21:12.454610 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-756f867d68-hgndg" event={"ID":"1ebe7703-7d1a-47d0-b3b2-8965365beb56","Type":"ContainerStarted","Data":"842569a94f4a05132961094dac88d918d021039b4ddc44836fe58ca29a02c69e"} Feb 16 17:21:12 crc kubenswrapper[4870]: I0216 17:21:12.474085 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dbb74864-cqlt9" event={"ID":"291e56ef-3b45-4a21-875c-f90daaf45e0b","Type":"ContainerStarted","Data":"7887937d3123398f4cd84ecaf1469495c51d0db32677ddb49b3e65694fa1d308"} Feb 16 17:21:12 crc kubenswrapper[4870]: I0216 17:21:12.478492 4870 generic.go:334] "Generic (PLEG): container finished" podID="ea7726c3-83d9-4ab1-99a5-7242373754fd" containerID="58b07c4144b7431eca2620aff99bd277227d1eb684c8311ab21ee2f18e6906d6" exitCode=0 Feb 16 17:21:12 crc kubenswrapper[4870]: I0216 17:21:12.478737 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" event={"ID":"ea7726c3-83d9-4ab1-99a5-7242373754fd","Type":"ContainerDied","Data":"58b07c4144b7431eca2620aff99bd277227d1eb684c8311ab21ee2f18e6906d6"} Feb 16 17:21:12 crc kubenswrapper[4870]: I0216 17:21:12.482723 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-798bd5bd64-st2b8" event={"ID":"96bd7dab-7469-4449-b1dd-dc57aa17c27c","Type":"ContainerStarted","Data":"065dd553f16ab9b79618d72a86c86f3480fb5e6489a41cc38b8fc388a4af6d86"} Feb 16 17:21:12 crc kubenswrapper[4870]: I0216 17:21:12.494183 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-756987b5cd-6brc9" event={"ID":"5faea269-54ff-4f1f-933c-e16bf517fa14","Type":"ContainerStarted","Data":"7af4892aca934c251eb05093349732bec393c752388ebc4ceddfc05e111a4bb7"} Feb 16 17:21:12 crc kubenswrapper[4870]: I0216 17:21:12.516049 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 17:21:12 crc kubenswrapper[4870]: I0216 17:21:12.516097 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 17:21:12 crc kubenswrapper[4870]: I0216 17:21:12.523961 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" event={"ID":"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6","Type":"ContainerStarted","Data":"185a711579143bbc459b2b6a288dd96b457e9d50f137ce0947c071177885fa52"} Feb 16 17:21:12 crc kubenswrapper[4870]: I0216 17:21:12.570289 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 17:21:12 crc kubenswrapper[4870]: I0216 17:21:12.581389 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.002126 4870 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/barbican-api-6dbb74864-cqlt9"] Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.041398 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-958875f6b-md5pd"] Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.043046 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.044877 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.045207 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.083861 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-958875f6b-md5pd"] Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.171150 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4da9035-cb64-4693-9364-66edc8e1cea6-config-data-custom\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.171449 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4da9035-cb64-4693-9364-66edc8e1cea6-config-data\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.171779 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4da9035-cb64-4693-9364-66edc8e1cea6-logs\") pod 
\"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.171930 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xdk2\" (UniqueName: \"kubernetes.io/projected/d4da9035-cb64-4693-9364-66edc8e1cea6-kube-api-access-6xdk2\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.172390 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4da9035-cb64-4693-9364-66edc8e1cea6-combined-ca-bundle\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.172488 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4da9035-cb64-4693-9364-66edc8e1cea6-internal-tls-certs\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.172773 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4da9035-cb64-4693-9364-66edc8e1cea6-public-tls-certs\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.274798 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xdk2\" (UniqueName: 
\"kubernetes.io/projected/d4da9035-cb64-4693-9364-66edc8e1cea6-kube-api-access-6xdk2\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.274967 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4da9035-cb64-4693-9364-66edc8e1cea6-combined-ca-bundle\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.275024 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4da9035-cb64-4693-9364-66edc8e1cea6-internal-tls-certs\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.275073 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4da9035-cb64-4693-9364-66edc8e1cea6-public-tls-certs\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.275111 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4da9035-cb64-4693-9364-66edc8e1cea6-config-data-custom\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.275323 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d4da9035-cb64-4693-9364-66edc8e1cea6-config-data\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.275389 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4da9035-cb64-4693-9364-66edc8e1cea6-logs\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.279898 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4da9035-cb64-4693-9364-66edc8e1cea6-config-data-custom\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.280744 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4da9035-cb64-4693-9364-66edc8e1cea6-internal-tls-certs\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.281156 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4da9035-cb64-4693-9364-66edc8e1cea6-logs\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.283583 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4da9035-cb64-4693-9364-66edc8e1cea6-combined-ca-bundle\") pod \"barbican-api-958875f6b-md5pd\" 
(UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.285253 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4da9035-cb64-4693-9364-66edc8e1cea6-public-tls-certs\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.296762 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4da9035-cb64-4693-9364-66edc8e1cea6-config-data\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.303617 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xdk2\" (UniqueName: \"kubernetes.io/projected/d4da9035-cb64-4693-9364-66edc8e1cea6-kube-api-access-6xdk2\") pod \"barbican-api-958875f6b-md5pd\" (UID: \"d4da9035-cb64-4693-9364-66edc8e1cea6\") " pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.369546 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.536972 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-798bd5bd64-st2b8" event={"ID":"96bd7dab-7469-4449-b1dd-dc57aa17c27c","Type":"ContainerStarted","Data":"a6cee6c28b7c7fc1131e837e8c8819fcd918c71ce285cd767866175559afc576"} Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.537142 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.539811 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00a8e7ab-716d-408e-a531-c49194dca35c","Type":"ContainerStarted","Data":"dc7e3d4994866d266c53cd63755dd24b12c7fa513b569229e8dcc1e8bca43594"} Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.543836 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-756f867d68-hgndg" event={"ID":"1ebe7703-7d1a-47d0-b3b2-8965365beb56","Type":"ContainerStarted","Data":"57ee7b7d7399eee629e1b53b30f0a3b85b97c9437d65cb432dbdba558d6cad01"} Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.546529 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dbb74864-cqlt9" event={"ID":"291e56ef-3b45-4a21-875c-f90daaf45e0b","Type":"ContainerStarted","Data":"9b695a6011d66493e4f8cc271c1a6eea680f0af61579eaf18507cf4124d6731c"} Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.547000 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.547500 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.563596 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-798bd5bd64-st2b8" podStartSLOduration=5.563573969 podStartE2EDuration="5.563573969s" podCreationTimestamp="2026-02-16 17:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:13.560204953 +0000 UTC m=+1278.043669337" watchObservedRunningTime="2026-02-16 17:21:13.563573969 +0000 UTC m=+1278.047038353" Feb 16 17:21:13 crc kubenswrapper[4870]: I0216 17:21:13.595401 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.595387307 podStartE2EDuration="11.595387307s" podCreationTimestamp="2026-02-16 17:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:13.592176345 +0000 UTC m=+1278.075640729" watchObservedRunningTime="2026-02-16 17:21:13.595387307 +0000 UTC m=+1278.078851691" Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.516763 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-958875f6b-md5pd"] Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.635275 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-756987b5cd-6brc9" event={"ID":"5faea269-54ff-4f1f-933c-e16bf517fa14","Type":"ContainerStarted","Data":"8fa18c9a114f0fb75e3caefb132c456e1bf47ee4a9a78ceed4c73727ffddda1a"} Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.636487 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.636551 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.677143 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-api-756987b5cd-6brc9" podStartSLOduration=5.677121081 podStartE2EDuration="5.677121081s" podCreationTimestamp="2026-02-16 17:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:14.660375173 +0000 UTC m=+1279.143839557" watchObservedRunningTime="2026-02-16 17:21:14.677121081 +0000 UTC m=+1279.160585465" Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.687301 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c84768f67-lv86b" event={"ID":"55dc3430-223f-4944-9678-6a93b6d69499","Type":"ContainerStarted","Data":"c83025cdfcaf3387108daf82b7eaa38d350886f14f0cf8bcbd25b20b1412ba8f"} Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.719301 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-756f867d68-hgndg" event={"ID":"1ebe7703-7d1a-47d0-b3b2-8965365beb56","Type":"ContainerStarted","Data":"22390200518b8061239a155e4d7c265e06a0d551ad27b46c2ff5e40eae0c3303"} Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.719805 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.719852 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.741207 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8754cf966-v85sw" event={"ID":"05e5e81a-aae4-486a-9804-d9b4b1cd74ee","Type":"ContainerStarted","Data":"6f8428e9aea9c98f86b3576addbfbfa204755323b1bebec662d597552e75a62d"} Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.743663 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dbb74864-cqlt9" 
event={"ID":"291e56ef-3b45-4a21-875c-f90daaf45e0b","Type":"ContainerStarted","Data":"d2e81256ba3cf850d427a2f81fa1b2115f18a2b6026e37ae0d638344cf6afa2f"} Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.743860 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6dbb74864-cqlt9" podUID="291e56ef-3b45-4a21-875c-f90daaf45e0b" containerName="barbican-api-log" containerID="cri-o://9b695a6011d66493e4f8cc271c1a6eea680f0af61579eaf18507cf4124d6731c" gracePeriod=30 Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.744438 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.744483 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.744528 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6dbb74864-cqlt9" podUID="291e56ef-3b45-4a21-875c-f90daaf45e0b" containerName="barbican-api" containerID="cri-o://d2e81256ba3cf850d427a2f81fa1b2115f18a2b6026e37ae0d638344cf6afa2f" gracePeriod=30 Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.758256 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-756f867d68-hgndg" podStartSLOduration=5.758229824 podStartE2EDuration="5.758229824s" podCreationTimestamp="2026-02-16 17:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:14.755371572 +0000 UTC m=+1279.238835956" watchObservedRunningTime="2026-02-16 17:21:14.758229824 +0000 UTC m=+1279.241694218" Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.766017 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" 
event={"ID":"ea7726c3-83d9-4ab1-99a5-7242373754fd","Type":"ContainerStarted","Data":"4f8aba9f618f1a728a1043f77465183f4a7ce627dd8080c2b3919b93c828441b"} Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.766264 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.772452 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" event={"ID":"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6","Type":"ContainerStarted","Data":"42903aae8e6de79f4541cd455ddbc4056cb43a101f157d9e968d6de822055f2c"} Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.773621 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.810387 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6dbb74864-cqlt9" podStartSLOduration=5.810359381 podStartE2EDuration="5.810359381s" podCreationTimestamp="2026-02-16 17:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:14.795488077 +0000 UTC m=+1279.278952461" watchObservedRunningTime="2026-02-16 17:21:14.810359381 +0000 UTC m=+1279.293823765" Feb 16 17:21:14 crc kubenswrapper[4870]: I0216 17:21:14.842360 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" podStartSLOduration=5.8423414430000005 podStartE2EDuration="5.842341443s" podCreationTimestamp="2026-02-16 17:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:14.840387767 +0000 UTC m=+1279.323852151" watchObservedRunningTime="2026-02-16 17:21:14.842341443 +0000 UTC m=+1279.325805837" Feb 16 17:21:15 crc 
kubenswrapper[4870]: E0216 17:21:15.235763 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod291e56ef_3b45_4a21_875c_f90daaf45e0b.slice/crio-conmon-9b695a6011d66493e4f8cc271c1a6eea680f0af61579eaf18507cf4124d6731c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod291e56ef_3b45_4a21_875c_f90daaf45e0b.slice/crio-9b695a6011d66493e4f8cc271c1a6eea680f0af61579eaf18507cf4124d6731c.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:21:15 crc kubenswrapper[4870]: I0216 17:21:15.823519 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-958875f6b-md5pd" event={"ID":"d4da9035-cb64-4693-9364-66edc8e1cea6","Type":"ContainerStarted","Data":"05bfb193e5e02ba6690a5f9f32d0581445f8e66e563b7efeba2f211b42d27591"} Feb 16 17:21:15 crc kubenswrapper[4870]: I0216 17:21:15.824201 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-958875f6b-md5pd" event={"ID":"d4da9035-cb64-4693-9364-66edc8e1cea6","Type":"ContainerStarted","Data":"c8386cd0f8c743deb1658f97398d6a54e742209dd3f2922d1531bbc46d550d89"} Feb 16 17:21:15 crc kubenswrapper[4870]: I0216 17:21:15.824220 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-958875f6b-md5pd" event={"ID":"d4da9035-cb64-4693-9364-66edc8e1cea6","Type":"ContainerStarted","Data":"ed42013804daffb96d109c06945d303ab77bb4f2f6cacc9d95fa1fe1c650c007"} Feb 16 17:21:15 crc kubenswrapper[4870]: I0216 17:21:15.825528 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:15 crc kubenswrapper[4870]: I0216 17:21:15.825556 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:15 crc kubenswrapper[4870]: I0216 
17:21:15.865512 4870 generic.go:334] "Generic (PLEG): container finished" podID="291e56ef-3b45-4a21-875c-f90daaf45e0b" containerID="9b695a6011d66493e4f8cc271c1a6eea680f0af61579eaf18507cf4124d6731c" exitCode=143 Feb 16 17:21:15 crc kubenswrapper[4870]: I0216 17:21:15.865606 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dbb74864-cqlt9" event={"ID":"291e56ef-3b45-4a21-875c-f90daaf45e0b","Type":"ContainerDied","Data":"9b695a6011d66493e4f8cc271c1a6eea680f0af61579eaf18507cf4124d6731c"} Feb 16 17:21:15 crc kubenswrapper[4870]: I0216 17:21:15.871543 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-958875f6b-md5pd" podStartSLOduration=2.871523938 podStartE2EDuration="2.871523938s" podCreationTimestamp="2026-02-16 17:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:15.851806506 +0000 UTC m=+1280.335270890" watchObservedRunningTime="2026-02-16 17:21:15.871523938 +0000 UTC m=+1280.354988322" Feb 16 17:21:15 crc kubenswrapper[4870]: I0216 17:21:15.888523 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-89df85dc-88tt5" event={"ID":"887feada-bbae-4e0a-bb20-a1e29b65cef9","Type":"ContainerStarted","Data":"cd2ffb60255c8f180e84be9685a6e3b208fb82fbd18af1b1b3926143a89d0898"} Feb 16 17:21:15 crc kubenswrapper[4870]: I0216 17:21:15.888599 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-89df85dc-88tt5" event={"ID":"887feada-bbae-4e0a-bb20-a1e29b65cef9","Type":"ContainerStarted","Data":"906a578fde2273c1d7c29e827e12a3da4b2a49bff2a17618e04dc44ca6d52696"} Feb 16 17:21:15 crc kubenswrapper[4870]: I0216 17:21:15.912670 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-89df85dc-88tt5" podStartSLOduration=4.723669924 podStartE2EDuration="7.912645031s" 
podCreationTimestamp="2026-02-16 17:21:08 +0000 UTC" firstStartedPulling="2026-02-16 17:21:10.832858433 +0000 UTC m=+1275.316322817" lastFinishedPulling="2026-02-16 17:21:14.02183354 +0000 UTC m=+1278.505297924" observedRunningTime="2026-02-16 17:21:15.910649594 +0000 UTC m=+1280.394113978" watchObservedRunningTime="2026-02-16 17:21:15.912645031 +0000 UTC m=+1280.396109415" Feb 16 17:21:15 crc kubenswrapper[4870]: I0216 17:21:15.914321 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" event={"ID":"d01fcbdc-1303-44a6-95ff-cffdad0e2fa6","Type":"ContainerStarted","Data":"dc3738e8fd32545d6a5c9d08010b9170b3e38cd5a622976e6a9253ecba2bd04b"} Feb 16 17:21:15 crc kubenswrapper[4870]: I0216 17:21:15.933773 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c84768f67-lv86b" event={"ID":"55dc3430-223f-4944-9678-6a93b6d69499","Type":"ContainerStarted","Data":"20c2e21c04b178c140326656c408b9209b6b12cec9a9d914eeb98b33595560d4"} Feb 16 17:21:15 crc kubenswrapper[4870]: I0216 17:21:15.952584 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:21:15 crc kubenswrapper[4870]: I0216 17:21:15.953559 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8754cf966-v85sw" event={"ID":"05e5e81a-aae4-486a-9804-d9b4b1cd74ee","Type":"ContainerStarted","Data":"fcf6688399f7079301517a4a3193ed53399ddc1927db4c994cda8d03ffa98319"} Feb 16 17:21:15 crc kubenswrapper[4870]: I0216 17:21:15.959542 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-d9984d7fd-5x6fd" podStartSLOduration=4.591096816 podStartE2EDuration="6.959516578s" podCreationTimestamp="2026-02-16 17:21:09 +0000 UTC" firstStartedPulling="2026-02-16 17:21:11.626465679 +0000 UTC m=+1276.109930063" lastFinishedPulling="2026-02-16 17:21:13.994885441 +0000 UTC m=+1278.478349825" 
observedRunningTime="2026-02-16 17:21:15.941413332 +0000 UTC m=+1280.424877716" watchObservedRunningTime="2026-02-16 17:21:15.959516578 +0000 UTC m=+1280.442980962" Feb 16 17:21:15 crc kubenswrapper[4870]: I0216 17:21:15.982291 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5c84768f67-lv86b" podStartSLOduration=4.06853397 podStartE2EDuration="6.982273077s" podCreationTimestamp="2026-02-16 17:21:09 +0000 UTC" firstStartedPulling="2026-02-16 17:21:11.062722809 +0000 UTC m=+1275.546187193" lastFinishedPulling="2026-02-16 17:21:13.976461916 +0000 UTC m=+1278.459926300" observedRunningTime="2026-02-16 17:21:15.969012369 +0000 UTC m=+1280.452476753" watchObservedRunningTime="2026-02-16 17:21:15.982273077 +0000 UTC m=+1280.465737461" Feb 16 17:21:16 crc kubenswrapper[4870]: I0216 17:21:16.069767 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-8754cf966-v85sw" podStartSLOduration=4.643514707 podStartE2EDuration="8.069746402s" podCreationTimestamp="2026-02-16 17:21:08 +0000 UTC" firstStartedPulling="2026-02-16 17:21:10.549973524 +0000 UTC m=+1275.033437908" lastFinishedPulling="2026-02-16 17:21:13.976205209 +0000 UTC m=+1278.459669603" observedRunningTime="2026-02-16 17:21:16.001196287 +0000 UTC m=+1280.484660671" watchObservedRunningTime="2026-02-16 17:21:16.069746402 +0000 UTC m=+1280.553210786" Feb 16 17:21:16 crc kubenswrapper[4870]: I0216 17:21:16.078232 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-89df85dc-88tt5"] Feb 16 17:21:16 crc kubenswrapper[4870]: I0216 17:21:16.106790 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-8754cf966-v85sw"] Feb 16 17:21:16 crc kubenswrapper[4870]: I0216 17:21:16.720781 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 17:21:17 crc 
kubenswrapper[4870]: I0216 17:21:17.984319 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-8754cf966-v85sw" podUID="05e5e81a-aae4-486a-9804-d9b4b1cd74ee" containerName="barbican-keystone-listener-log" containerID="cri-o://6f8428e9aea9c98f86b3576addbfbfa204755323b1bebec662d597552e75a62d" gracePeriod=30 Feb 16 17:21:17 crc kubenswrapper[4870]: I0216 17:21:17.984724 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-89df85dc-88tt5" podUID="887feada-bbae-4e0a-bb20-a1e29b65cef9" containerName="barbican-worker-log" containerID="cri-o://906a578fde2273c1d7c29e827e12a3da4b2a49bff2a17618e04dc44ca6d52696" gracePeriod=30 Feb 16 17:21:17 crc kubenswrapper[4870]: I0216 17:21:17.984852 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-89df85dc-88tt5" podUID="887feada-bbae-4e0a-bb20-a1e29b65cef9" containerName="barbican-worker" containerID="cri-o://cd2ffb60255c8f180e84be9685a6e3b208fb82fbd18af1b1b3926143a89d0898" gracePeriod=30 Feb 16 17:21:17 crc kubenswrapper[4870]: I0216 17:21:17.984979 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-8754cf966-v85sw" podUID="05e5e81a-aae4-486a-9804-d9b4b1cd74ee" containerName="barbican-keystone-listener" containerID="cri-o://fcf6688399f7079301517a4a3193ed53399ddc1927db4c994cda8d03ffa98319" gracePeriod=30 Feb 16 17:21:18 crc kubenswrapper[4870]: I0216 17:21:18.345479 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 17:21:19 crc kubenswrapper[4870]: I0216 17:21:19.006292 4870 generic.go:334] "Generic (PLEG): container finished" podID="9719dd82-cec9-4a56-ae93-29ccca75a3ef" containerID="d99881c559e2858a8ad39267eb69f5c4df47aaf96e0282d2788304dd218584e2" exitCode=0 Feb 16 17:21:19 crc kubenswrapper[4870]: I0216 17:21:19.006412 
4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4mwgd" event={"ID":"9719dd82-cec9-4a56-ae93-29ccca75a3ef","Type":"ContainerDied","Data":"d99881c559e2858a8ad39267eb69f5c4df47aaf96e0282d2788304dd218584e2"} Feb 16 17:21:19 crc kubenswrapper[4870]: I0216 17:21:19.010975 4870 generic.go:334] "Generic (PLEG): container finished" podID="05e5e81a-aae4-486a-9804-d9b4b1cd74ee" containerID="fcf6688399f7079301517a4a3193ed53399ddc1927db4c994cda8d03ffa98319" exitCode=0 Feb 16 17:21:19 crc kubenswrapper[4870]: I0216 17:21:19.011112 4870 generic.go:334] "Generic (PLEG): container finished" podID="05e5e81a-aae4-486a-9804-d9b4b1cd74ee" containerID="6f8428e9aea9c98f86b3576addbfbfa204755323b1bebec662d597552e75a62d" exitCode=143 Feb 16 17:21:19 crc kubenswrapper[4870]: I0216 17:21:19.011262 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8754cf966-v85sw" event={"ID":"05e5e81a-aae4-486a-9804-d9b4b1cd74ee","Type":"ContainerDied","Data":"fcf6688399f7079301517a4a3193ed53399ddc1927db4c994cda8d03ffa98319"} Feb 16 17:21:19 crc kubenswrapper[4870]: I0216 17:21:19.011331 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8754cf966-v85sw" event={"ID":"05e5e81a-aae4-486a-9804-d9b4b1cd74ee","Type":"ContainerDied","Data":"6f8428e9aea9c98f86b3576addbfbfa204755323b1bebec662d597552e75a62d"} Feb 16 17:21:19 crc kubenswrapper[4870]: I0216 17:21:19.013843 4870 generic.go:334] "Generic (PLEG): container finished" podID="887feada-bbae-4e0a-bb20-a1e29b65cef9" containerID="cd2ffb60255c8f180e84be9685a6e3b208fb82fbd18af1b1b3926143a89d0898" exitCode=0 Feb 16 17:21:19 crc kubenswrapper[4870]: I0216 17:21:19.014043 4870 generic.go:334] "Generic (PLEG): container finished" podID="887feada-bbae-4e0a-bb20-a1e29b65cef9" containerID="906a578fde2273c1d7c29e827e12a3da4b2a49bff2a17618e04dc44ca6d52696" exitCode=143 Feb 16 17:21:19 crc kubenswrapper[4870]: I0216 17:21:19.014015 4870 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-89df85dc-88tt5" event={"ID":"887feada-bbae-4e0a-bb20-a1e29b65cef9","Type":"ContainerDied","Data":"cd2ffb60255c8f180e84be9685a6e3b208fb82fbd18af1b1b3926143a89d0898"} Feb 16 17:21:19 crc kubenswrapper[4870]: I0216 17:21:19.014255 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-89df85dc-88tt5" event={"ID":"887feada-bbae-4e0a-bb20-a1e29b65cef9","Type":"ContainerDied","Data":"906a578fde2273c1d7c29e827e12a3da4b2a49bff2a17618e04dc44ca6d52696"} Feb 16 17:21:19 crc kubenswrapper[4870]: I0216 17:21:19.951270 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:20 crc kubenswrapper[4870]: I0216 17:21:20.051477 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-9kr25"] Feb 16 17:21:20 crc kubenswrapper[4870]: I0216 17:21:20.051892 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-9kr25" podUID="b8e4be99-05cc-436e-9634-b6302dc49fa5" containerName="dnsmasq-dns" containerID="cri-o://2debca4bf0415947ae6bcc801479db9bfed32100a26e90a6a83ee654f99cc8a2" gracePeriod=10 Feb 16 17:21:21 crc kubenswrapper[4870]: I0216 17:21:21.086661 4870 generic.go:334] "Generic (PLEG): container finished" podID="b8e4be99-05cc-436e-9634-b6302dc49fa5" containerID="2debca4bf0415947ae6bcc801479db9bfed32100a26e90a6a83ee654f99cc8a2" exitCode=0 Feb 16 17:21:21 crc kubenswrapper[4870]: I0216 17:21:21.086962 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-9kr25" event={"ID":"b8e4be99-05cc-436e-9634-b6302dc49fa5","Type":"ContainerDied","Data":"2debca4bf0415947ae6bcc801479db9bfed32100a26e90a6a83ee654f99cc8a2"} Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.071575 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.082389 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.103245 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.280710 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-9kr25" podUID="b8e4be99-05cc-436e-9634-b6302dc49fa5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: connect: connection refused" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.391938 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85848b6785-nrbf6"] Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.392238 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-85848b6785-nrbf6" podUID="b846df4e-a215-42b4-a15d-08eea2d03652" containerName="neutron-api" containerID="cri-o://e919b9cd5390cb83a421aa35ab78f2118fcebad544b96e741075fbd5400cce7d" gracePeriod=30 Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.392344 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-85848b6785-nrbf6" podUID="b846df4e-a215-42b4-a15d-08eea2d03652" containerName="neutron-httpd" containerID="cri-o://1498d0e1e74cabb6b0bc8a4d251685ed6a76e0a95d68f461e07ce9f1bbdbbd70" gracePeriod=30 Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.444004 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-67bf48b897-78ftj"] Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.445816 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.471808 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67bf48b897-78ftj"] Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.478325 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.509181 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.509231 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.515053 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.521536 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-85848b6785-nrbf6" podUID="b846df4e-a215-42b4-a15d-08eea2d03652" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.171:9696/\": read tcp 10.217.0.2:60122->10.217.0.171:9696: read: connection reset by peer" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.579582 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-ovndb-tls-certs\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.579947 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-726hz\" (UniqueName: 
\"kubernetes.io/projected/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-kube-api-access-726hz\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.580055 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-public-tls-certs\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.580096 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-combined-ca-bundle\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.580154 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-config\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.580240 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-internal-tls-certs\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.580282 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-httpd-config\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.581456 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.585245 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.682323 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-public-tls-certs\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.682395 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-combined-ca-bundle\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.682478 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-config\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.682578 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-internal-tls-certs\") pod \"neutron-67bf48b897-78ftj\" (UID: 
\"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.682623 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-httpd-config\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.682780 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-ovndb-tls-certs\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.682811 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-726hz\" (UniqueName: \"kubernetes.io/projected/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-kube-api-access-726hz\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.690018 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-httpd-config\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.693945 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-combined-ca-bundle\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc 
kubenswrapper[4870]: I0216 17:21:22.694625 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-public-tls-certs\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.694913 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-internal-tls-certs\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.695474 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-ovndb-tls-certs\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.701474 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-config\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.711643 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-726hz\" (UniqueName: \"kubernetes.io/projected/3ec0f9b7-8e31-4b80-bb3b-5245632bc524-kube-api-access-726hz\") pod \"neutron-67bf48b897-78ftj\" (UID: \"3ec0f9b7-8e31-4b80-bb3b-5245632bc524\") " pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.765105 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.958730 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:22 crc kubenswrapper[4870]: I0216 17:21:22.964238 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.094427 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-config-data\") pod \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.094497 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-config-data-custom\") pod \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.094538 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-scripts\") pod \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.094555 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9719dd82-cec9-4a56-ae93-29ccca75a3ef-etc-machine-id\") pod \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.094615 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-logs\") pod \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.094643 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-config-data\") pod \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.094682 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-combined-ca-bundle\") pod \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.094721 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf52h\" (UniqueName: \"kubernetes.io/projected/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-kube-api-access-zf52h\") pod \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.094812 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-combined-ca-bundle\") pod \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\" (UID: \"05e5e81a-aae4-486a-9804-d9b4b1cd74ee\") " Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.094832 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-db-sync-config-data\") pod \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " Feb 16 17:21:23 crc 
kubenswrapper[4870]: I0216 17:21:23.094848 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhj74\" (UniqueName: \"kubernetes.io/projected/9719dd82-cec9-4a56-ae93-29ccca75a3ef-kube-api-access-fhj74\") pod \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\" (UID: \"9719dd82-cec9-4a56-ae93-29ccca75a3ef\") " Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.096159 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-logs" (OuterVolumeSpecName: "logs") pod "05e5e81a-aae4-486a-9804-d9b4b1cd74ee" (UID: "05e5e81a-aae4-486a-9804-d9b4b1cd74ee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.108364 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9719dd82-cec9-4a56-ae93-29ccca75a3ef-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9719dd82-cec9-4a56-ae93-29ccca75a3ef" (UID: "9719dd82-cec9-4a56-ae93-29ccca75a3ef"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.119460 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9719dd82-cec9-4a56-ae93-29ccca75a3ef-kube-api-access-fhj74" (OuterVolumeSpecName: "kube-api-access-fhj74") pod "9719dd82-cec9-4a56-ae93-29ccca75a3ef" (UID: "9719dd82-cec9-4a56-ae93-29ccca75a3ef"). InnerVolumeSpecName "kube-api-access-fhj74". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.119928 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-scripts" (OuterVolumeSpecName: "scripts") pod "9719dd82-cec9-4a56-ae93-29ccca75a3ef" (UID: "9719dd82-cec9-4a56-ae93-29ccca75a3ef"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.126425 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-kube-api-access-zf52h" (OuterVolumeSpecName: "kube-api-access-zf52h") pod "05e5e81a-aae4-486a-9804-d9b4b1cd74ee" (UID: "05e5e81a-aae4-486a-9804-d9b4b1cd74ee"). InnerVolumeSpecName "kube-api-access-zf52h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.138280 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9719dd82-cec9-4a56-ae93-29ccca75a3ef" (UID: "9719dd82-cec9-4a56-ae93-29ccca75a3ef"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.178452 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "05e5e81a-aae4-486a-9804-d9b4b1cd74ee" (UID: "05e5e81a-aae4-486a-9804-d9b4b1cd74ee"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.201002 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.201028 4870 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9719dd82-cec9-4a56-ae93-29ccca75a3ef-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.201040 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.201049 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf52h\" (UniqueName: \"kubernetes.io/projected/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-kube-api-access-zf52h\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.201058 4870 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.201067 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhj74\" (UniqueName: \"kubernetes.io/projected/9719dd82-cec9-4a56-ae93-29ccca75a3ef-kube-api-access-fhj74\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.201075 4870 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.224269 4870 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9719dd82-cec9-4a56-ae93-29ccca75a3ef" (UID: "9719dd82-cec9-4a56-ae93-29ccca75a3ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.235148 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05e5e81a-aae4-486a-9804-d9b4b1cd74ee" (UID: "05e5e81a-aae4-486a-9804-d9b4b1cd74ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.235185 4870 generic.go:334] "Generic (PLEG): container finished" podID="b846df4e-a215-42b4-a15d-08eea2d03652" containerID="1498d0e1e74cabb6b0bc8a4d251685ed6a76e0a95d68f461e07ce9f1bbdbbd70" exitCode=0 Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.235284 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85848b6785-nrbf6" event={"ID":"b846df4e-a215-42b4-a15d-08eea2d03652","Type":"ContainerDied","Data":"1498d0e1e74cabb6b0bc8a4d251685ed6a76e0a95d68f461e07ce9f1bbdbbd70"} Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.271401 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4mwgd" event={"ID":"9719dd82-cec9-4a56-ae93-29ccca75a3ef","Type":"ContainerDied","Data":"f4c248383c60ec58d16a019495fba4b1aa73de9677bac50cfcd7b99e34cb3780"} Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.271697 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4c248383c60ec58d16a019495fba4b1aa73de9677bac50cfcd7b99e34cb3780" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.271899 4870 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4mwgd" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.306033 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8754cf966-v85sw" event={"ID":"05e5e81a-aae4-486a-9804-d9b4b1cd74ee","Type":"ContainerDied","Data":"a420b445b449ba867e7dcc6e634410d90180925f668a645c8deedb5c7feda148"} Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.306125 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.306153 4870 scope.go:117] "RemoveContainer" containerID="fcf6688399f7079301517a4a3193ed53399ddc1927db4c994cda8d03ffa98319" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.306400 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8754cf966-v85sw" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.306944 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.307250 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.309283 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.349150 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-config-data" (OuterVolumeSpecName: "config-data") pod 
"05e5e81a-aae4-486a-9804-d9b4b1cd74ee" (UID: "05e5e81a-aae4-486a-9804-d9b4b1cd74ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.397599 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-config-data" (OuterVolumeSpecName: "config-data") pod "9719dd82-cec9-4a56-ae93-29ccca75a3ef" (UID: "9719dd82-cec9-4a56-ae93-29ccca75a3ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.411359 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9719dd82-cec9-4a56-ae93-29ccca75a3ef-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.411393 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05e5e81a-aae4-486a-9804-d9b4b1cd74ee-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.757151 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-8754cf966-v85sw"] Feb 16 17:21:23 crc kubenswrapper[4870]: I0216 17:21:23.779395 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-8754cf966-v85sw"] Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.281694 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05e5e81a-aae4-486a-9804-d9b4b1cd74ee" path="/var/lib/kubelet/pods/05e5e81a-aae4-486a-9804-d9b4b1cd74ee/volumes" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.306277 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 17:21:24 crc kubenswrapper[4870]: E0216 17:21:24.306780 4870 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="05e5e81a-aae4-486a-9804-d9b4b1cd74ee" containerName="barbican-keystone-listener" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.306799 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e5e81a-aae4-486a-9804-d9b4b1cd74ee" containerName="barbican-keystone-listener" Feb 16 17:21:24 crc kubenswrapper[4870]: E0216 17:21:24.306831 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e5e81a-aae4-486a-9804-d9b4b1cd74ee" containerName="barbican-keystone-listener-log" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.306843 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e5e81a-aae4-486a-9804-d9b4b1cd74ee" containerName="barbican-keystone-listener-log" Feb 16 17:21:24 crc kubenswrapper[4870]: E0216 17:21:24.306864 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9719dd82-cec9-4a56-ae93-29ccca75a3ef" containerName="cinder-db-sync" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.306873 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9719dd82-cec9-4a56-ae93-29ccca75a3ef" containerName="cinder-db-sync" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.316938 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="05e5e81a-aae4-486a-9804-d9b4b1cd74ee" containerName="barbican-keystone-listener-log" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.316998 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="05e5e81a-aae4-486a-9804-d9b4b1cd74ee" containerName="barbican-keystone-listener" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.317020 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="9719dd82-cec9-4a56-ae93-29ccca75a3ef" containerName="cinder-db-sync" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.318535 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.329570 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.329773 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.330430 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ggqxr" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.330581 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.348313 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.427016 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6hrp5"] Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.428780 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.441339 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6hrp5"] Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.459307 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85390668-2afa-4995-ada6-fe4a2b44afdb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.459346 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-config-data\") pod \"cinder-scheduler-0\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.459408 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.459476 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-scripts\") pod \"cinder-scheduler-0\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.459497 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdphz\" (UniqueName: 
\"kubernetes.io/projected/85390668-2afa-4995-ada6-fe4a2b44afdb-kube-api-access-tdphz\") pod \"cinder-scheduler-0\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.459517 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.543691 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.545596 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.548140 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.561723 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.561812 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7v7c\" (UniqueName: \"kubernetes.io/projected/3d29aa8d-0873-48ed-8f06-665b855a6037-kube-api-access-w7v7c\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.561841 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85390668-2afa-4995-ada6-fe4a2b44afdb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.561861 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-config-data\") pod \"cinder-scheduler-0\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.561887 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.561923 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.561976 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.562003 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-config\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.562025 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.562047 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-scripts\") pod \"cinder-scheduler-0\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.562067 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdphz\" (UniqueName: \"kubernetes.io/projected/85390668-2afa-4995-ada6-fe4a2b44afdb-kube-api-access-tdphz\") pod \"cinder-scheduler-0\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.562087 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.564137 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85390668-2afa-4995-ada6-fe4a2b44afdb-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.569979 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.571440 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.573750 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-scripts\") pod \"cinder-scheduler-0\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.586057 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-config-data\") pod \"cinder-scheduler-0\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.612557 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.612744 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdphz\" (UniqueName: \"kubernetes.io/projected/85390668-2afa-4995-ada6-fe4a2b44afdb-kube-api-access-tdphz\") pod \"cinder-scheduler-0\" (UID: 
\"85390668-2afa-4995-ada6-fe4a2b44afdb\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.663859 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.663976 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-config-data\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.664008 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.664057 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f8f68af-963d-41b6-89a4-1448670e187e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.664079 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7v7c\" (UniqueName: \"kubernetes.io/projected/3d29aa8d-0873-48ed-8f06-665b855a6037-kube-api-access-w7v7c\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 
16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.664161 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.664228 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-scripts\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.664257 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g627k\" (UniqueName: \"kubernetes.io/projected/2f8f68af-963d-41b6-89a4-1448670e187e-kube-api-access-g627k\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.664273 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-config-data-custom\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.664322 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.664346 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-config\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.664361 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f8f68af-963d-41b6-89a4-1448670e187e-logs\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.664415 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.666535 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.670092 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.672477 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.672650 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.672793 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-config\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.684222 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7v7c\" (UniqueName: \"kubernetes.io/projected/3d29aa8d-0873-48ed-8f06-665b855a6037-kube-api-access-w7v7c\") pod \"dnsmasq-dns-5c9776ccc5-6hrp5\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.694545 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.766356 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-config-data\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.766711 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.766760 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f8f68af-963d-41b6-89a4-1448670e187e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.766840 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-scripts\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.766875 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g627k\" (UniqueName: \"kubernetes.io/projected/2f8f68af-963d-41b6-89a4-1448670e187e-kube-api-access-g627k\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.766898 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-config-data-custom\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.766953 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f8f68af-963d-41b6-89a4-1448670e187e-logs\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.767028 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f8f68af-963d-41b6-89a4-1448670e187e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.771195 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f8f68af-963d-41b6-89a4-1448670e187e-logs\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.775786 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-config-data\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.776296 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.780486 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.786613 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g627k\" (UniqueName: \"kubernetes.io/projected/2f8f68af-963d-41b6-89a4-1448670e187e-kube-api-access-g627k\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.787333 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-config-data-custom\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.792281 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-scripts\") pod \"cinder-api-0\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " pod="openstack/cinder-api-0" Feb 16 17:21:24 crc kubenswrapper[4870]: I0216 17:21:24.869920 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 17:21:25 crc kubenswrapper[4870]: E0216 17:21:25.241273 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.305112 4870 scope.go:117] "RemoveContainer" containerID="6f8428e9aea9c98f86b3576addbfbfa204755323b1bebec662d597552e75a62d" Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.357345 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.410495 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-89df85dc-88tt5" event={"ID":"887feada-bbae-4e0a-bb20-a1e29b65cef9","Type":"ContainerDied","Data":"635a6fef2a1ddef0d349aaa375bfd9fb8ca1678252abd859a22b96eac903c2f2"} Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.410635 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.452246 4870 generic.go:334] "Generic (PLEG): container finished" podID="b846df4e-a215-42b4-a15d-08eea2d03652" containerID="e919b9cd5390cb83a421aa35ab78f2118fcebad544b96e741075fbd5400cce7d" exitCode=0 Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.452342 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85848b6785-nrbf6" event={"ID":"b846df4e-a215-42b4-a15d-08eea2d03652","Type":"ContainerDied","Data":"e919b9cd5390cb83a421aa35ab78f2118fcebad544b96e741075fbd5400cce7d"} Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.452441 4870 scope.go:117] "RemoveContainer" containerID="cd2ffb60255c8f180e84be9685a6e3b208fb82fbd18af1b1b3926143a89d0898" Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.485581 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-config-data\") pod \"887feada-bbae-4e0a-bb20-a1e29b65cef9\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.485628 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-config-data-custom\") pod \"887feada-bbae-4e0a-bb20-a1e29b65cef9\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.485736 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/887feada-bbae-4e0a-bb20-a1e29b65cef9-logs\") pod \"887feada-bbae-4e0a-bb20-a1e29b65cef9\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.485870 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-combined-ca-bundle\") pod \"887feada-bbae-4e0a-bb20-a1e29b65cef9\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.485915 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhkl2\" (UniqueName: \"kubernetes.io/projected/887feada-bbae-4e0a-bb20-a1e29b65cef9-kube-api-access-nhkl2\") pod \"887feada-bbae-4e0a-bb20-a1e29b65cef9\" (UID: \"887feada-bbae-4e0a-bb20-a1e29b65cef9\") " Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.487479 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/887feada-bbae-4e0a-bb20-a1e29b65cef9-logs" (OuterVolumeSpecName: "logs") pod "887feada-bbae-4e0a-bb20-a1e29b65cef9" (UID: "887feada-bbae-4e0a-bb20-a1e29b65cef9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.491833 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "887feada-bbae-4e0a-bb20-a1e29b65cef9" (UID: "887feada-bbae-4e0a-bb20-a1e29b65cef9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.501200 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/887feada-bbae-4e0a-bb20-a1e29b65cef9-kube-api-access-nhkl2" (OuterVolumeSpecName: "kube-api-access-nhkl2") pod "887feada-bbae-4e0a-bb20-a1e29b65cef9" (UID: "887feada-bbae-4e0a-bb20-a1e29b65cef9"). InnerVolumeSpecName "kube-api-access-nhkl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.522162 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "887feada-bbae-4e0a-bb20-a1e29b65cef9" (UID: "887feada-bbae-4e0a-bb20-a1e29b65cef9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.589590 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.589622 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhkl2\" (UniqueName: \"kubernetes.io/projected/887feada-bbae-4e0a-bb20-a1e29b65cef9-kube-api-access-nhkl2\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.589635 4870 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.589662 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/887feada-bbae-4e0a-bb20-a1e29b65cef9-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.602905 4870 scope.go:117] "RemoveContainer" containerID="906a578fde2273c1d7c29e827e12a3da4b2a49bff2a17618e04dc44ca6d52696" Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.825111 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-config-data" (OuterVolumeSpecName: 
"config-data") pod "887feada-bbae-4e0a-bb20-a1e29b65cef9" (UID: "887feada-bbae-4e0a-bb20-a1e29b65cef9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:25 crc kubenswrapper[4870]: I0216 17:21:25.904422 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887feada-bbae-4e0a-bb20-a1e29b65cef9-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.367186 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.523292 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-dns-svc\") pod \"b8e4be99-05cc-436e-9634-b6302dc49fa5\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.523389 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5269z\" (UniqueName: \"kubernetes.io/projected/b8e4be99-05cc-436e-9634-b6302dc49fa5-kube-api-access-5269z\") pod \"b8e4be99-05cc-436e-9634-b6302dc49fa5\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.523541 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-config\") pod \"b8e4be99-05cc-436e-9634-b6302dc49fa5\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.523586 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-dns-swift-storage-0\") pod 
\"b8e4be99-05cc-436e-9634-b6302dc49fa5\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.523618 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-ovsdbserver-nb\") pod \"b8e4be99-05cc-436e-9634-b6302dc49fa5\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.523643 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-ovsdbserver-sb\") pod \"b8e4be99-05cc-436e-9634-b6302dc49fa5\" (UID: \"b8e4be99-05cc-436e-9634-b6302dc49fa5\") " Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.534218 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e4be99-05cc-436e-9634-b6302dc49fa5-kube-api-access-5269z" (OuterVolumeSpecName: "kube-api-access-5269z") pod "b8e4be99-05cc-436e-9634-b6302dc49fa5" (UID: "b8e4be99-05cc-436e-9634-b6302dc49fa5"). InnerVolumeSpecName "kube-api-access-5269z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:26 crc kubenswrapper[4870]: E0216 17:21:26.534721 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="38836e81-1b99-4b50-ada2-40727db1f248" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.567613 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-9kr25" event={"ID":"b8e4be99-05cc-436e-9634-b6302dc49fa5","Type":"ContainerDied","Data":"9442dbe80aae68e00f21c0039010eb83f993b893c22fcfddc25cbfa6c4634f56"} Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.567681 4870 scope.go:117] "RemoveContainer" containerID="2debca4bf0415947ae6bcc801479db9bfed32100a26e90a6a83ee654f99cc8a2" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.567868 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-9kr25" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.632393 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5269z\" (UniqueName: \"kubernetes.io/projected/b8e4be99-05cc-436e-9634-b6302dc49fa5-kube-api-access-5269z\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.643766 4870 scope.go:117] "RemoveContainer" containerID="e31bce6764ed30ed827af4bd809dcccff0614cace4afe5ae05286628706fe31c" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.671435 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b8e4be99-05cc-436e-9634-b6302dc49fa5" (UID: "b8e4be99-05cc-436e-9634-b6302dc49fa5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.690462 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b8e4be99-05cc-436e-9634-b6302dc49fa5" (UID: "b8e4be99-05cc-436e-9634-b6302dc49fa5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.734615 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.736164 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.756298 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.798445 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-config" (OuterVolumeSpecName: "config") pod "b8e4be99-05cc-436e-9634-b6302dc49fa5" (UID: "b8e4be99-05cc-436e-9634-b6302dc49fa5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.838865 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.842467 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b8e4be99-05cc-436e-9634-b6302dc49fa5" (UID: "b8e4be99-05cc-436e-9634-b6302dc49fa5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.842890 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-958875f6b-md5pd" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.844051 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b8e4be99-05cc-436e-9634-b6302dc49fa5" (UID: "b8e4be99-05cc-436e-9634-b6302dc49fa5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.948155 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:26 crc kubenswrapper[4870]: I0216 17:21:26.948187 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8e4be99-05cc-436e-9634-b6302dc49fa5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.002331 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-756987b5cd-6brc9"] Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.002623 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-756987b5cd-6brc9" podUID="5faea269-54ff-4f1f-933c-e16bf517fa14" containerName="barbican-api-log" containerID="cri-o://7af4892aca934c251eb05093349732bec393c752388ebc4ceddfc05e111a4bb7" gracePeriod=30 Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.002937 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-756987b5cd-6brc9" podUID="5faea269-54ff-4f1f-933c-e16bf517fa14" containerName="barbican-api" containerID="cri-o://8fa18c9a114f0fb75e3caefb132c456e1bf47ee4a9a78ceed4c73727ffddda1a" gracePeriod=30 Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.027238 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-756987b5cd-6brc9" podUID="5faea269-54ff-4f1f-933c-e16bf517fa14" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": EOF" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.028050 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-756987b5cd-6brc9" 
podUID="5faea269-54ff-4f1f-933c-e16bf517fa14" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": EOF" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.078170 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-9kr25"] Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.112329 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-9kr25"] Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.271482 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.327923 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.364097 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-internal-tls-certs\") pod \"b846df4e-a215-42b4-a15d-08eea2d03652\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.364183 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-httpd-config\") pod \"b846df4e-a215-42b4-a15d-08eea2d03652\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.364219 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-public-tls-certs\") pod \"b846df4e-a215-42b4-a15d-08eea2d03652\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.364293 4870 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-rts4r\" (UniqueName: \"kubernetes.io/projected/b846df4e-a215-42b4-a15d-08eea2d03652-kube-api-access-rts4r\") pod \"b846df4e-a215-42b4-a15d-08eea2d03652\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.364382 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-config\") pod \"b846df4e-a215-42b4-a15d-08eea2d03652\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.364421 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-ovndb-tls-certs\") pod \"b846df4e-a215-42b4-a15d-08eea2d03652\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.364506 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-combined-ca-bundle\") pod \"b846df4e-a215-42b4-a15d-08eea2d03652\" (UID: \"b846df4e-a215-42b4-a15d-08eea2d03652\") " Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.404641 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b846df4e-a215-42b4-a15d-08eea2d03652-kube-api-access-rts4r" (OuterVolumeSpecName: "kube-api-access-rts4r") pod "b846df4e-a215-42b4-a15d-08eea2d03652" (UID: "b846df4e-a215-42b4-a15d-08eea2d03652"). InnerVolumeSpecName "kube-api-access-rts4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.406939 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b846df4e-a215-42b4-a15d-08eea2d03652" (UID: "b846df4e-a215-42b4-a15d-08eea2d03652"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.470254 4870 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.470290 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rts4r\" (UniqueName: \"kubernetes.io/projected/b846df4e-a215-42b4-a15d-08eea2d03652-kube-api-access-rts4r\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.484861 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67bf48b897-78ftj"] Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.510100 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6hrp5"] Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.577444 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.590386 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" event={"ID":"3d29aa8d-0873-48ed-8f06-665b855a6037","Type":"ContainerStarted","Data":"d422f9aaefb9701f351ff281fc01eee9e7c559bc275f4c75944ad959c2221586"} Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.633180 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85848b6785-nrbf6" 
event={"ID":"b846df4e-a215-42b4-a15d-08eea2d03652","Type":"ContainerDied","Data":"7472706d949b4d1b14cd2bd9acdca1eebb350c6c417774d20b49b4e0cd24a9de"} Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.633500 4870 scope.go:117] "RemoveContainer" containerID="1498d0e1e74cabb6b0bc8a4d251685ed6a76e0a95d68f461e07ce9f1bbdbbd70" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.633631 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85848b6785-nrbf6" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.646964 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b846df4e-a215-42b4-a15d-08eea2d03652" (UID: "b846df4e-a215-42b4-a15d-08eea2d03652"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.678341 4870 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.679392 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.681319 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38836e81-1b99-4b50-ada2-40727db1f248","Type":"ContainerStarted","Data":"16d932a2042e38071a98c8a86343f68eccb2f9335bc68d3d2637c67eeb82e662"} Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.686403 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.681268 4870 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="38836e81-1b99-4b50-ada2-40727db1f248" containerName="sg-core" containerID="cri-o://d02b2381bf8f2683de03b2ccdd3ce10b27ef7cdee07bd8ee3818ba5a1749d450" gracePeriod=30 Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.680840 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38836e81-1b99-4b50-ada2-40727db1f248" containerName="ceilometer-notification-agent" containerID="cri-o://30d9110ca712c5906cf63cfc54c4cbba0cc83abca2a5553851f9284e795acfb0" gracePeriod=30 Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.686343 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.681251 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38836e81-1b99-4b50-ada2-40727db1f248" containerName="proxy-httpd" containerID="cri-o://16d932a2042e38071a98c8a86343f68eccb2f9335bc68d3d2637c67eeb82e662" gracePeriod=30 Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.750143 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-config" (OuterVolumeSpecName: "config") pod "b846df4e-a215-42b4-a15d-08eea2d03652" (UID: "b846df4e-a215-42b4-a15d-08eea2d03652"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.750636 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85390668-2afa-4995-ada6-fe4a2b44afdb","Type":"ContainerStarted","Data":"8762aaf83b80c650646d00d537b7824a638f0a0e2d75c844fd776daf6306bec2"} Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.793781 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.795202 4870 scope.go:117] "RemoveContainer" containerID="e919b9cd5390cb83a421aa35ab78f2118fcebad544b96e741075fbd5400cce7d" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.822473 4870 generic.go:334] "Generic (PLEG): container finished" podID="5faea269-54ff-4f1f-933c-e16bf517fa14" containerID="7af4892aca934c251eb05093349732bec393c752388ebc4ceddfc05e111a4bb7" exitCode=143 Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.822529 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-756987b5cd-6brc9" event={"ID":"5faea269-54ff-4f1f-933c-e16bf517fa14","Type":"ContainerDied","Data":"7af4892aca934c251eb05093349732bec393c752388ebc4ceddfc05e111a4bb7"} Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.848262 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67bf48b897-78ftj" event={"ID":"3ec0f9b7-8e31-4b80-bb3b-5245632bc524","Type":"ContainerStarted","Data":"a2d26285f03c5f88efcbd6aa4d57aa30c05de32c5e116fbcd495339959abb86c"} Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.851966 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b846df4e-a215-42b4-a15d-08eea2d03652" (UID: 
"b846df4e-a215-42b4-a15d-08eea2d03652"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.869713 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2f8f68af-963d-41b6-89a4-1448670e187e","Type":"ContainerStarted","Data":"195eb0cf50319ef4db4e5a4d936223859ff01977a246fea6bc7ece908915462e"} Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.896034 4870 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.938275 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b846df4e-a215-42b4-a15d-08eea2d03652" (UID: "b846df4e-a215-42b4-a15d-08eea2d03652"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.979112 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b846df4e-a215-42b4-a15d-08eea2d03652" (UID: "b846df4e-a215-42b4-a15d-08eea2d03652"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.998736 4870 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:27 crc kubenswrapper[4870]: I0216 17:21:27.998778 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b846df4e-a215-42b4-a15d-08eea2d03652-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:28 crc kubenswrapper[4870]: I0216 17:21:28.304431 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e4be99-05cc-436e-9634-b6302dc49fa5" path="/var/lib/kubelet/pods/b8e4be99-05cc-436e-9634-b6302dc49fa5/volumes" Feb 16 17:21:28 crc kubenswrapper[4870]: I0216 17:21:28.306279 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 17:21:28 crc kubenswrapper[4870]: I0216 17:21:28.494140 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85848b6785-nrbf6"] Feb 16 17:21:28 crc kubenswrapper[4870]: I0216 17:21:28.520570 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-85848b6785-nrbf6"] Feb 16 17:21:28 crc kubenswrapper[4870]: I0216 17:21:28.887528 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 17:21:28 crc kubenswrapper[4870]: I0216 17:21:28.925523 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2f8f68af-963d-41b6-89a4-1448670e187e","Type":"ContainerStarted","Data":"ef178f4ef20b90d82b9caa259bec736cfac006315173943090567acb2c7d4906"} Feb 16 17:21:28 crc kubenswrapper[4870]: I0216 17:21:28.928045 4870 generic.go:334] "Generic (PLEG): container finished" podID="3d29aa8d-0873-48ed-8f06-665b855a6037" 
containerID="ffc0513f7e308113478d014509a21f823897f92b8a18ae61519188507558b19f" exitCode=0 Feb 16 17:21:28 crc kubenswrapper[4870]: I0216 17:21:28.928089 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" event={"ID":"3d29aa8d-0873-48ed-8f06-665b855a6037","Type":"ContainerDied","Data":"ffc0513f7e308113478d014509a21f823897f92b8a18ae61519188507558b19f"} Feb 16 17:21:28 crc kubenswrapper[4870]: I0216 17:21:28.940082 4870 generic.go:334] "Generic (PLEG): container finished" podID="38836e81-1b99-4b50-ada2-40727db1f248" containerID="16d932a2042e38071a98c8a86343f68eccb2f9335bc68d3d2637c67eeb82e662" exitCode=0 Feb 16 17:21:28 crc kubenswrapper[4870]: I0216 17:21:28.940111 4870 generic.go:334] "Generic (PLEG): container finished" podID="38836e81-1b99-4b50-ada2-40727db1f248" containerID="d02b2381bf8f2683de03b2ccdd3ce10b27ef7cdee07bd8ee3818ba5a1749d450" exitCode=2 Feb 16 17:21:28 crc kubenswrapper[4870]: I0216 17:21:28.940144 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38836e81-1b99-4b50-ada2-40727db1f248","Type":"ContainerDied","Data":"16d932a2042e38071a98c8a86343f68eccb2f9335bc68d3d2637c67eeb82e662"} Feb 16 17:21:28 crc kubenswrapper[4870]: I0216 17:21:28.940166 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38836e81-1b99-4b50-ada2-40727db1f248","Type":"ContainerDied","Data":"d02b2381bf8f2683de03b2ccdd3ce10b27ef7cdee07bd8ee3818ba5a1749d450"} Feb 16 17:21:28 crc kubenswrapper[4870]: I0216 17:21:28.942118 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67bf48b897-78ftj" event={"ID":"3ec0f9b7-8e31-4b80-bb3b-5245632bc524","Type":"ContainerStarted","Data":"cd01ab5ad7896574f4f939ade7375da325f57202e73e64d3b46ca9f0d3a58b48"} Feb 16 17:21:29 crc kubenswrapper[4870]: I0216 17:21:29.975856 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"85390668-2afa-4995-ada6-fe4a2b44afdb","Type":"ContainerStarted","Data":"dcc421c124679e744a63c615e3604536c74c1e2533ed2dfa960d5465aa26b09c"} Feb 16 17:21:29 crc kubenswrapper[4870]: I0216 17:21:29.979784 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67bf48b897-78ftj" event={"ID":"3ec0f9b7-8e31-4b80-bb3b-5245632bc524","Type":"ContainerStarted","Data":"e3140ce77074bf171f7260fb632ee0f5c374062919deef0456f38481784fed72"} Feb 16 17:21:29 crc kubenswrapper[4870]: I0216 17:21:29.980939 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:29 crc kubenswrapper[4870]: I0216 17:21:29.986906 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2f8f68af-963d-41b6-89a4-1448670e187e","Type":"ContainerStarted","Data":"a841874dc8c9d028f0a34a64cc96f8825fb763b28ba2f94eb9acf96947519388"} Feb 16 17:21:29 crc kubenswrapper[4870]: I0216 17:21:29.987013 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 17:21:29 crc kubenswrapper[4870]: I0216 17:21:29.987024 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2f8f68af-963d-41b6-89a4-1448670e187e" containerName="cinder-api-log" containerID="cri-o://ef178f4ef20b90d82b9caa259bec736cfac006315173943090567acb2c7d4906" gracePeriod=30 Feb 16 17:21:29 crc kubenswrapper[4870]: I0216 17:21:29.987067 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2f8f68af-963d-41b6-89a4-1448670e187e" containerName="cinder-api" containerID="cri-o://a841874dc8c9d028f0a34a64cc96f8825fb763b28ba2f94eb9acf96947519388" gracePeriod=30 Feb 16 17:21:29 crc kubenswrapper[4870]: I0216 17:21:29.994098 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" 
event={"ID":"3d29aa8d-0873-48ed-8f06-665b855a6037","Type":"ContainerStarted","Data":"f8638965369202d74398f6a4841bc97d6a6050afeabe91a1292bf05e9b7ff318"} Feb 16 17:21:29 crc kubenswrapper[4870]: I0216 17:21:29.994375 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.004694 4870 generic.go:334] "Generic (PLEG): container finished" podID="38836e81-1b99-4b50-ada2-40727db1f248" containerID="30d9110ca712c5906cf63cfc54c4cbba0cc83abca2a5553851f9284e795acfb0" exitCode=0 Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.004733 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38836e81-1b99-4b50-ada2-40727db1f248","Type":"ContainerDied","Data":"30d9110ca712c5906cf63cfc54c4cbba0cc83abca2a5553851f9284e795acfb0"} Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.074681 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-67bf48b897-78ftj" podStartSLOduration=8.074662616 podStartE2EDuration="8.074662616s" podCreationTimestamp="2026-02-16 17:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:30.017015772 +0000 UTC m=+1294.500480176" watchObservedRunningTime="2026-02-16 17:21:30.074662616 +0000 UTC m=+1294.558127000" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.077517 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" podStartSLOduration=6.077502967 podStartE2EDuration="6.077502967s" podCreationTimestamp="2026-02-16 17:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:30.073093471 +0000 UTC m=+1294.556557855" watchObservedRunningTime="2026-02-16 17:21:30.077502967 +0000 UTC 
m=+1294.560967351" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.117771 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.117750655 podStartE2EDuration="6.117750655s" podCreationTimestamp="2026-02-16 17:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:30.11439908 +0000 UTC m=+1294.597863464" watchObservedRunningTime="2026-02-16 17:21:30.117750655 +0000 UTC m=+1294.601215039" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.255228 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b846df4e-a215-42b4-a15d-08eea2d03652" path="/var/lib/kubelet/pods/b846df4e-a215-42b4-a15d-08eea2d03652/volumes" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.266614 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.376928 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-scripts\") pod \"38836e81-1b99-4b50-ada2-40727db1f248\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.377011 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38836e81-1b99-4b50-ada2-40727db1f248-log-httpd\") pod \"38836e81-1b99-4b50-ada2-40727db1f248\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.377088 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnjmz\" (UniqueName: \"kubernetes.io/projected/38836e81-1b99-4b50-ada2-40727db1f248-kube-api-access-nnjmz\") pod \"38836e81-1b99-4b50-ada2-40727db1f248\" 
(UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.377160 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38836e81-1b99-4b50-ada2-40727db1f248-run-httpd\") pod \"38836e81-1b99-4b50-ada2-40727db1f248\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.377272 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-sg-core-conf-yaml\") pod \"38836e81-1b99-4b50-ada2-40727db1f248\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.377353 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-config-data\") pod \"38836e81-1b99-4b50-ada2-40727db1f248\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.377400 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-combined-ca-bundle\") pod \"38836e81-1b99-4b50-ada2-40727db1f248\" (UID: \"38836e81-1b99-4b50-ada2-40727db1f248\") " Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.384327 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38836e81-1b99-4b50-ada2-40727db1f248-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "38836e81-1b99-4b50-ada2-40727db1f248" (UID: "38836e81-1b99-4b50-ada2-40727db1f248"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.385615 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38836e81-1b99-4b50-ada2-40727db1f248-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "38836e81-1b99-4b50-ada2-40727db1f248" (UID: "38836e81-1b99-4b50-ada2-40727db1f248"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.393081 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-scripts" (OuterVolumeSpecName: "scripts") pod "38836e81-1b99-4b50-ada2-40727db1f248" (UID: "38836e81-1b99-4b50-ada2-40727db1f248"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.415112 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38836e81-1b99-4b50-ada2-40727db1f248-kube-api-access-nnjmz" (OuterVolumeSpecName: "kube-api-access-nnjmz") pod "38836e81-1b99-4b50-ada2-40727db1f248" (UID: "38836e81-1b99-4b50-ada2-40727db1f248"). InnerVolumeSpecName "kube-api-access-nnjmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.483286 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.483317 4870 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38836e81-1b99-4b50-ada2-40727db1f248-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.483327 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnjmz\" (UniqueName: \"kubernetes.io/projected/38836e81-1b99-4b50-ada2-40727db1f248-kube-api-access-nnjmz\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.483337 4870 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38836e81-1b99-4b50-ada2-40727db1f248-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.505132 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "38836e81-1b99-4b50-ada2-40727db1f248" (UID: "38836e81-1b99-4b50-ada2-40727db1f248"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.524111 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38836e81-1b99-4b50-ada2-40727db1f248" (UID: "38836e81-1b99-4b50-ada2-40727db1f248"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.555601 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-config-data" (OuterVolumeSpecName: "config-data") pod "38836e81-1b99-4b50-ada2-40727db1f248" (UID: "38836e81-1b99-4b50-ada2-40727db1f248"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.585497 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.585777 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:30 crc kubenswrapper[4870]: I0216 17:21:30.585866 4870 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38836e81-1b99-4b50-ada2-40727db1f248-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.018677 4870 generic.go:334] "Generic (PLEG): container finished" podID="2f8f68af-963d-41b6-89a4-1448670e187e" containerID="ef178f4ef20b90d82b9caa259bec736cfac006315173943090567acb2c7d4906" exitCode=143 Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.018762 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2f8f68af-963d-41b6-89a4-1448670e187e","Type":"ContainerDied","Data":"ef178f4ef20b90d82b9caa259bec736cfac006315173943090567acb2c7d4906"} Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.028487 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"38836e81-1b99-4b50-ada2-40727db1f248","Type":"ContainerDied","Data":"e12bfe26cc023fa1a43c28d86e2efe388cb80db9a8d047be6d78d74d39f21fe4"} Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.028548 4870 scope.go:117] "RemoveContainer" containerID="16d932a2042e38071a98c8a86343f68eccb2f9335bc68d3d2637c67eeb82e662" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.028753 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.037243 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85390668-2afa-4995-ada6-fe4a2b44afdb","Type":"ContainerStarted","Data":"59bf16c6f7af67f78064ae9685764054d0a9e931c6a8a18de79002aac9a01529"} Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.067150 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.002875899 podStartE2EDuration="7.067118304s" podCreationTimestamp="2026-02-16 17:21:24 +0000 UTC" firstStartedPulling="2026-02-16 17:21:27.60306715 +0000 UTC m=+1292.086531534" lastFinishedPulling="2026-02-16 17:21:28.667309555 +0000 UTC m=+1293.150773939" observedRunningTime="2026-02-16 17:21:31.064555011 +0000 UTC m=+1295.548019415" watchObservedRunningTime="2026-02-16 17:21:31.067118304 +0000 UTC m=+1295.550582688" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.095834 4870 scope.go:117] "RemoveContainer" containerID="d02b2381bf8f2683de03b2ccdd3ce10b27ef7cdee07bd8ee3818ba5a1749d450" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.126037 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.142045 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.150159 4870 scope.go:117] "RemoveContainer" 
containerID="30d9110ca712c5906cf63cfc54c4cbba0cc83abca2a5553851f9284e795acfb0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.154073 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:21:31 crc kubenswrapper[4870]: E0216 17:21:31.154461 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e4be99-05cc-436e-9634-b6302dc49fa5" containerName="dnsmasq-dns" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.154475 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e4be99-05cc-436e-9634-b6302dc49fa5" containerName="dnsmasq-dns" Feb 16 17:21:31 crc kubenswrapper[4870]: E0216 17:21:31.154488 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e4be99-05cc-436e-9634-b6302dc49fa5" containerName="init" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.154494 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e4be99-05cc-436e-9634-b6302dc49fa5" containerName="init" Feb 16 17:21:31 crc kubenswrapper[4870]: E0216 17:21:31.154505 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887feada-bbae-4e0a-bb20-a1e29b65cef9" containerName="barbican-worker-log" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.154511 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="887feada-bbae-4e0a-bb20-a1e29b65cef9" containerName="barbican-worker-log" Feb 16 17:21:31 crc kubenswrapper[4870]: E0216 17:21:31.154517 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38836e81-1b99-4b50-ada2-40727db1f248" containerName="ceilometer-notification-agent" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.154523 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="38836e81-1b99-4b50-ada2-40727db1f248" containerName="ceilometer-notification-agent" Feb 16 17:21:31 crc kubenswrapper[4870]: E0216 17:21:31.154533 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38836e81-1b99-4b50-ada2-40727db1f248" 
containerName="sg-core" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.154539 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="38836e81-1b99-4b50-ada2-40727db1f248" containerName="sg-core" Feb 16 17:21:31 crc kubenswrapper[4870]: E0216 17:21:31.154563 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887feada-bbae-4e0a-bb20-a1e29b65cef9" containerName="barbican-worker" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.154568 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="887feada-bbae-4e0a-bb20-a1e29b65cef9" containerName="barbican-worker" Feb 16 17:21:31 crc kubenswrapper[4870]: E0216 17:21:31.154577 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b846df4e-a215-42b4-a15d-08eea2d03652" containerName="neutron-httpd" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.154583 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b846df4e-a215-42b4-a15d-08eea2d03652" containerName="neutron-httpd" Feb 16 17:21:31 crc kubenswrapper[4870]: E0216 17:21:31.154596 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38836e81-1b99-4b50-ada2-40727db1f248" containerName="proxy-httpd" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.154601 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="38836e81-1b99-4b50-ada2-40727db1f248" containerName="proxy-httpd" Feb 16 17:21:31 crc kubenswrapper[4870]: E0216 17:21:31.154613 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b846df4e-a215-42b4-a15d-08eea2d03652" containerName="neutron-api" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.154618 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b846df4e-a215-42b4-a15d-08eea2d03652" containerName="neutron-api" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.154778 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="887feada-bbae-4e0a-bb20-a1e29b65cef9" containerName="barbican-worker-log" Feb 16 17:21:31 
crc kubenswrapper[4870]: I0216 17:21:31.154795 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e4be99-05cc-436e-9634-b6302dc49fa5" containerName="dnsmasq-dns" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.154805 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="887feada-bbae-4e0a-bb20-a1e29b65cef9" containerName="barbican-worker" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.154810 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="38836e81-1b99-4b50-ada2-40727db1f248" containerName="ceilometer-notification-agent" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.154823 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="38836e81-1b99-4b50-ada2-40727db1f248" containerName="sg-core" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.154832 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="38836e81-1b99-4b50-ada2-40727db1f248" containerName="proxy-httpd" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.154843 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="b846df4e-a215-42b4-a15d-08eea2d03652" containerName="neutron-api" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.154850 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="b846df4e-a215-42b4-a15d-08eea2d03652" containerName="neutron-httpd" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.156656 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.160261 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.160453 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.166266 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.299455 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-scripts\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.299523 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee3935ab-edfc-4f1d-ac06-32e332389334-log-httpd\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.299688 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.299732 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " 
pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.299822 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-config-data\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.299852 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfznw\" (UniqueName: \"kubernetes.io/projected/ee3935ab-edfc-4f1d-ac06-32e332389334-kube-api-access-hfznw\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.299896 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee3935ab-edfc-4f1d-ac06-32e332389334-run-httpd\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.402031 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.402333 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.402638 4870 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-config-data\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.402696 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfznw\" (UniqueName: \"kubernetes.io/projected/ee3935ab-edfc-4f1d-ac06-32e332389334-kube-api-access-hfznw\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.402784 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee3935ab-edfc-4f1d-ac06-32e332389334-run-httpd\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.402970 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-scripts\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.403031 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee3935ab-edfc-4f1d-ac06-32e332389334-log-httpd\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.404363 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee3935ab-edfc-4f1d-ac06-32e332389334-log-httpd\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc 
kubenswrapper[4870]: I0216 17:21:31.404504 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee3935ab-edfc-4f1d-ac06-32e332389334-run-httpd\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.407675 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.407875 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-config-data\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.409386 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-scripts\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.411815 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.443907 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfznw\" (UniqueName: \"kubernetes.io/projected/ee3935ab-edfc-4f1d-ac06-32e332389334-kube-api-access-hfznw\") pod \"ceilometer-0\" (UID: 
\"ee3935ab-edfc-4f1d-ac06-32e332389334\") " pod="openstack/ceilometer-0" Feb 16 17:21:31 crc kubenswrapper[4870]: I0216 17:21:31.490491 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:21:33 crc kubenswrapper[4870]: I0216 17:21:33.542830 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-756987b5cd-6brc9" podUID="5faea269-54ff-4f1f-933c-e16bf517fa14" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:43046->10.217.0.179:9311: read: connection reset by peer" Feb 16 17:21:33 crc kubenswrapper[4870]: I0216 17:21:33.542941 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-756987b5cd-6brc9" podUID="5faea269-54ff-4f1f-933c-e16bf517fa14" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:43034->10.217.0.179:9311: read: connection reset by peer" Feb 16 17:21:33 crc kubenswrapper[4870]: I0216 17:21:33.563157 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38836e81-1b99-4b50-ada2-40727db1f248" path="/var/lib/kubelet/pods/38836e81-1b99-4b50-ada2-40727db1f248/volumes" Feb 16 17:21:33 crc kubenswrapper[4870]: I0216 17:21:33.595400 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.162000 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.252775 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-combined-ca-bundle\") pod \"5faea269-54ff-4f1f-933c-e16bf517fa14\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.252846 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5faea269-54ff-4f1f-933c-e16bf517fa14-logs\") pod \"5faea269-54ff-4f1f-933c-e16bf517fa14\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.252930 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-config-data\") pod \"5faea269-54ff-4f1f-933c-e16bf517fa14\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.253052 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws6xd\" (UniqueName: \"kubernetes.io/projected/5faea269-54ff-4f1f-933c-e16bf517fa14-kube-api-access-ws6xd\") pod \"5faea269-54ff-4f1f-933c-e16bf517fa14\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.253085 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-config-data-custom\") pod \"5faea269-54ff-4f1f-933c-e16bf517fa14\" (UID: \"5faea269-54ff-4f1f-933c-e16bf517fa14\") " Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.253480 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5faea269-54ff-4f1f-933c-e16bf517fa14-logs" (OuterVolumeSpecName: "logs") pod "5faea269-54ff-4f1f-933c-e16bf517fa14" (UID: "5faea269-54ff-4f1f-933c-e16bf517fa14"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.257160 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5faea269-54ff-4f1f-933c-e16bf517fa14-kube-api-access-ws6xd" (OuterVolumeSpecName: "kube-api-access-ws6xd") pod "5faea269-54ff-4f1f-933c-e16bf517fa14" (UID: "5faea269-54ff-4f1f-933c-e16bf517fa14"). InnerVolumeSpecName "kube-api-access-ws6xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.263897 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5faea269-54ff-4f1f-933c-e16bf517fa14" (UID: "5faea269-54ff-4f1f-933c-e16bf517fa14"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.281278 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5faea269-54ff-4f1f-933c-e16bf517fa14" (UID: "5faea269-54ff-4f1f-933c-e16bf517fa14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.312333 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-config-data" (OuterVolumeSpecName: "config-data") pod "5faea269-54ff-4f1f-933c-e16bf517fa14" (UID: "5faea269-54ff-4f1f-933c-e16bf517fa14"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.358615 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws6xd\" (UniqueName: \"kubernetes.io/projected/5faea269-54ff-4f1f-933c-e16bf517fa14-kube-api-access-ws6xd\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.359076 4870 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.359192 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.359271 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5faea269-54ff-4f1f-933c-e16bf517fa14-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.359357 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5faea269-54ff-4f1f-933c-e16bf517fa14-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.608646 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee3935ab-edfc-4f1d-ac06-32e332389334","Type":"ContainerStarted","Data":"6a6b50ebfcb8160381df1fdb499884d07874760845a3b7b2641d7b0d204b330b"} Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.608695 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ee3935ab-edfc-4f1d-ac06-32e332389334","Type":"ContainerStarted","Data":"aa9439519d0995bd2925ed5ed08eba7823de537ba848fc6ebe415824b713bf59"} Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.611692 4870 generic.go:334] "Generic (PLEG): container finished" podID="5faea269-54ff-4f1f-933c-e16bf517fa14" containerID="8fa18c9a114f0fb75e3caefb132c456e1bf47ee4a9a78ceed4c73727ffddda1a" exitCode=0 Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.611769 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-756987b5cd-6brc9" event={"ID":"5faea269-54ff-4f1f-933c-e16bf517fa14","Type":"ContainerDied","Data":"8fa18c9a114f0fb75e3caefb132c456e1bf47ee4a9a78ceed4c73727ffddda1a"} Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.611792 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-756987b5cd-6brc9" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.611803 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-756987b5cd-6brc9" event={"ID":"5faea269-54ff-4f1f-933c-e16bf517fa14","Type":"ContainerDied","Data":"07c7c1fe7509203629102e7346e32ac3306d113b2e6f98cc1c173ab11c816785"} Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.611821 4870 scope.go:117] "RemoveContainer" containerID="8fa18c9a114f0fb75e3caefb132c456e1bf47ee4a9a78ceed4c73727ffddda1a" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.648163 4870 scope.go:117] "RemoveContainer" containerID="7af4892aca934c251eb05093349732bec393c752388ebc4ceddfc05e111a4bb7" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.651018 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-756987b5cd-6brc9"] Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.659076 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-756987b5cd-6brc9"] Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.674214 4870 scope.go:117] 
"RemoveContainer" containerID="8fa18c9a114f0fb75e3caefb132c456e1bf47ee4a9a78ceed4c73727ffddda1a" Feb 16 17:21:34 crc kubenswrapper[4870]: E0216 17:21:34.674826 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fa18c9a114f0fb75e3caefb132c456e1bf47ee4a9a78ceed4c73727ffddda1a\": container with ID starting with 8fa18c9a114f0fb75e3caefb132c456e1bf47ee4a9a78ceed4c73727ffddda1a not found: ID does not exist" containerID="8fa18c9a114f0fb75e3caefb132c456e1bf47ee4a9a78ceed4c73727ffddda1a" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.674876 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fa18c9a114f0fb75e3caefb132c456e1bf47ee4a9a78ceed4c73727ffddda1a"} err="failed to get container status \"8fa18c9a114f0fb75e3caefb132c456e1bf47ee4a9a78ceed4c73727ffddda1a\": rpc error: code = NotFound desc = could not find container \"8fa18c9a114f0fb75e3caefb132c456e1bf47ee4a9a78ceed4c73727ffddda1a\": container with ID starting with 8fa18c9a114f0fb75e3caefb132c456e1bf47ee4a9a78ceed4c73727ffddda1a not found: ID does not exist" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.674913 4870 scope.go:117] "RemoveContainer" containerID="7af4892aca934c251eb05093349732bec393c752388ebc4ceddfc05e111a4bb7" Feb 16 17:21:34 crc kubenswrapper[4870]: E0216 17:21:34.675477 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7af4892aca934c251eb05093349732bec393c752388ebc4ceddfc05e111a4bb7\": container with ID starting with 7af4892aca934c251eb05093349732bec393c752388ebc4ceddfc05e111a4bb7 not found: ID does not exist" containerID="7af4892aca934c251eb05093349732bec393c752388ebc4ceddfc05e111a4bb7" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.675506 4870 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7af4892aca934c251eb05093349732bec393c752388ebc4ceddfc05e111a4bb7"} err="failed to get container status \"7af4892aca934c251eb05093349732bec393c752388ebc4ceddfc05e111a4bb7\": rpc error: code = NotFound desc = could not find container \"7af4892aca934c251eb05093349732bec393c752388ebc4ceddfc05e111a4bb7\": container with ID starting with 7af4892aca934c251eb05093349732bec393c752388ebc4ceddfc05e111a4bb7 not found: ID does not exist" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.696190 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.778591 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.877834 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-fvtqg"] Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.878559 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" podUID="ea7726c3-83d9-4ab1-99a5-7242373754fd" containerName="dnsmasq-dns" containerID="cri-o://4f8aba9f618f1a728a1043f77465183f4a7ce627dd8080c2b3919b93c828441b" gracePeriod=10 Feb 16 17:21:34 crc kubenswrapper[4870]: I0216 17:21:34.954285 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" podUID="ea7726c3-83d9-4ab1-99a5-7242373754fd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.178:5353: connect: connection refused" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.020183 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.367329 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.367680 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.445071 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.487975 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-ovsdbserver-nb\") pod \"ea7726c3-83d9-4ab1-99a5-7242373754fd\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.488043 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-config\") pod \"ea7726c3-83d9-4ab1-99a5-7242373754fd\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.488119 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-ovsdbserver-sb\") pod \"ea7726c3-83d9-4ab1-99a5-7242373754fd\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.488157 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-dns-swift-storage-0\") pod \"ea7726c3-83d9-4ab1-99a5-7242373754fd\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.488263 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-dns-svc\") pod \"ea7726c3-83d9-4ab1-99a5-7242373754fd\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.488290 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79mgs\" (UniqueName: \"kubernetes.io/projected/ea7726c3-83d9-4ab1-99a5-7242373754fd-kube-api-access-79mgs\") pod \"ea7726c3-83d9-4ab1-99a5-7242373754fd\" (UID: \"ea7726c3-83d9-4ab1-99a5-7242373754fd\") " Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.508218 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea7726c3-83d9-4ab1-99a5-7242373754fd-kube-api-access-79mgs" (OuterVolumeSpecName: "kube-api-access-79mgs") pod "ea7726c3-83d9-4ab1-99a5-7242373754fd" (UID: "ea7726c3-83d9-4ab1-99a5-7242373754fd"). InnerVolumeSpecName "kube-api-access-79mgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.569792 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-config" (OuterVolumeSpecName: "config") pod "ea7726c3-83d9-4ab1-99a5-7242373754fd" (UID: "ea7726c3-83d9-4ab1-99a5-7242373754fd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.582218 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ea7726c3-83d9-4ab1-99a5-7242373754fd" (UID: "ea7726c3-83d9-4ab1-99a5-7242373754fd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.588411 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ea7726c3-83d9-4ab1-99a5-7242373754fd" (UID: "ea7726c3-83d9-4ab1-99a5-7242373754fd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.590312 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.590339 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.590350 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.590361 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79mgs\" (UniqueName: 
\"kubernetes.io/projected/ea7726c3-83d9-4ab1-99a5-7242373754fd-kube-api-access-79mgs\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.601423 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ea7726c3-83d9-4ab1-99a5-7242373754fd" (UID: "ea7726c3-83d9-4ab1-99a5-7242373754fd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.613476 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ea7726c3-83d9-4ab1-99a5-7242373754fd" (UID: "ea7726c3-83d9-4ab1-99a5-7242373754fd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.631248 4870 generic.go:334] "Generic (PLEG): container finished" podID="ea7726c3-83d9-4ab1-99a5-7242373754fd" containerID="4f8aba9f618f1a728a1043f77465183f4a7ce627dd8080c2b3919b93c828441b" exitCode=0 Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.631375 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.631812 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" event={"ID":"ea7726c3-83d9-4ab1-99a5-7242373754fd","Type":"ContainerDied","Data":"4f8aba9f618f1a728a1043f77465183f4a7ce627dd8080c2b3919b93c828441b"} Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.631868 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-fvtqg" event={"ID":"ea7726c3-83d9-4ab1-99a5-7242373754fd","Type":"ContainerDied","Data":"4dc6e16cba37c5d658630d9dc1180ef52c5ed94ca1e216afa58fae8ab7bef214"} Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.631886 4870 scope.go:117] "RemoveContainer" containerID="4f8aba9f618f1a728a1043f77465183f4a7ce627dd8080c2b3919b93c828441b" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.644231 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee3935ab-edfc-4f1d-ac06-32e332389334","Type":"ContainerStarted","Data":"2215d38d8d240ec90fad20daf7212c423ba48735ed26f8833d4afc8598fb5a86"} Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.691990 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.692249 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea7726c3-83d9-4ab1-99a5-7242373754fd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.732096 4870 scope.go:117] "RemoveContainer" containerID="58b07c4144b7431eca2620aff99bd277227d1eb684c8311ab21ee2f18e6906d6" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.739653 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-scheduler-0"] Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.766012 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-fvtqg"] Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.776415 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-fvtqg"] Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.787418 4870 scope.go:117] "RemoveContainer" containerID="4f8aba9f618f1a728a1043f77465183f4a7ce627dd8080c2b3919b93c828441b" Feb 16 17:21:35 crc kubenswrapper[4870]: E0216 17:21:35.792244 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f8aba9f618f1a728a1043f77465183f4a7ce627dd8080c2b3919b93c828441b\": container with ID starting with 4f8aba9f618f1a728a1043f77465183f4a7ce627dd8080c2b3919b93c828441b not found: ID does not exist" containerID="4f8aba9f618f1a728a1043f77465183f4a7ce627dd8080c2b3919b93c828441b" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.792447 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f8aba9f618f1a728a1043f77465183f4a7ce627dd8080c2b3919b93c828441b"} err="failed to get container status \"4f8aba9f618f1a728a1043f77465183f4a7ce627dd8080c2b3919b93c828441b\": rpc error: code = NotFound desc = could not find container \"4f8aba9f618f1a728a1043f77465183f4a7ce627dd8080c2b3919b93c828441b\": container with ID starting with 4f8aba9f618f1a728a1043f77465183f4a7ce627dd8080c2b3919b93c828441b not found: ID does not exist" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.792529 4870 scope.go:117] "RemoveContainer" containerID="58b07c4144b7431eca2620aff99bd277227d1eb684c8311ab21ee2f18e6906d6" Feb 16 17:21:35 crc kubenswrapper[4870]: E0216 17:21:35.798067 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"58b07c4144b7431eca2620aff99bd277227d1eb684c8311ab21ee2f18e6906d6\": container with ID starting with 58b07c4144b7431eca2620aff99bd277227d1eb684c8311ab21ee2f18e6906d6 not found: ID does not exist" containerID="58b07c4144b7431eca2620aff99bd277227d1eb684c8311ab21ee2f18e6906d6" Feb 16 17:21:35 crc kubenswrapper[4870]: I0216 17:21:35.798253 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58b07c4144b7431eca2620aff99bd277227d1eb684c8311ab21ee2f18e6906d6"} err="failed to get container status \"58b07c4144b7431eca2620aff99bd277227d1eb684c8311ab21ee2f18e6906d6\": rpc error: code = NotFound desc = could not find container \"58b07c4144b7431eca2620aff99bd277227d1eb684c8311ab21ee2f18e6906d6\": container with ID starting with 58b07c4144b7431eca2620aff99bd277227d1eb684c8311ab21ee2f18e6906d6 not found: ID does not exist" Feb 16 17:21:36 crc kubenswrapper[4870]: I0216 17:21:36.238836 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5faea269-54ff-4f1f-933c-e16bf517fa14" path="/var/lib/kubelet/pods/5faea269-54ff-4f1f-933c-e16bf517fa14/volumes" Feb 16 17:21:36 crc kubenswrapper[4870]: I0216 17:21:36.239868 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea7726c3-83d9-4ab1-99a5-7242373754fd" path="/var/lib/kubelet/pods/ea7726c3-83d9-4ab1-99a5-7242373754fd/volumes" Feb 16 17:21:36 crc kubenswrapper[4870]: I0216 17:21:36.703208 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee3935ab-edfc-4f1d-ac06-32e332389334","Type":"ContainerStarted","Data":"aa54c965768dd2b324933144e18df78fa4d3eef471fb68f9c56ebd58545061ce"} Feb 16 17:21:36 crc kubenswrapper[4870]: I0216 17:21:36.706721 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="85390668-2afa-4995-ada6-fe4a2b44afdb" containerName="cinder-scheduler" 
containerID="cri-o://dcc421c124679e744a63c615e3604536c74c1e2533ed2dfa960d5465aa26b09c" gracePeriod=30 Feb 16 17:21:36 crc kubenswrapper[4870]: I0216 17:21:36.707460 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="85390668-2afa-4995-ada6-fe4a2b44afdb" containerName="probe" containerID="cri-o://59bf16c6f7af67f78064ae9685764054d0a9e931c6a8a18de79002aac9a01529" gracePeriod=30 Feb 16 17:21:37 crc kubenswrapper[4870]: I0216 17:21:37.653878 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 17:21:37 crc kubenswrapper[4870]: I0216 17:21:37.731914 4870 generic.go:334] "Generic (PLEG): container finished" podID="85390668-2afa-4995-ada6-fe4a2b44afdb" containerID="59bf16c6f7af67f78064ae9685764054d0a9e931c6a8a18de79002aac9a01529" exitCode=0 Feb 16 17:21:37 crc kubenswrapper[4870]: I0216 17:21:37.731970 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85390668-2afa-4995-ada6-fe4a2b44afdb","Type":"ContainerDied","Data":"59bf16c6f7af67f78064ae9685764054d0a9e931c6a8a18de79002aac9a01529"} Feb 16 17:21:38 crc kubenswrapper[4870]: E0216 17:21:38.224775 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.235557 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.270887 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-combined-ca-bundle\") pod \"85390668-2afa-4995-ada6-fe4a2b44afdb\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.271053 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdphz\" (UniqueName: \"kubernetes.io/projected/85390668-2afa-4995-ada6-fe4a2b44afdb-kube-api-access-tdphz\") pod \"85390668-2afa-4995-ada6-fe4a2b44afdb\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.271086 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-config-data-custom\") pod \"85390668-2afa-4995-ada6-fe4a2b44afdb\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.271108 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85390668-2afa-4995-ada6-fe4a2b44afdb-etc-machine-id\") pod \"85390668-2afa-4995-ada6-fe4a2b44afdb\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.271232 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-scripts\") pod \"85390668-2afa-4995-ada6-fe4a2b44afdb\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.271262 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-config-data\") pod \"85390668-2afa-4995-ada6-fe4a2b44afdb\" (UID: \"85390668-2afa-4995-ada6-fe4a2b44afdb\") " Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.272219 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85390668-2afa-4995-ada6-fe4a2b44afdb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "85390668-2afa-4995-ada6-fe4a2b44afdb" (UID: "85390668-2afa-4995-ada6-fe4a2b44afdb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.285237 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "85390668-2afa-4995-ada6-fe4a2b44afdb" (UID: "85390668-2afa-4995-ada6-fe4a2b44afdb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.285295 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-scripts" (OuterVolumeSpecName: "scripts") pod "85390668-2afa-4995-ada6-fe4a2b44afdb" (UID: "85390668-2afa-4995-ada6-fe4a2b44afdb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.293286 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85390668-2afa-4995-ada6-fe4a2b44afdb-kube-api-access-tdphz" (OuterVolumeSpecName: "kube-api-access-tdphz") pod "85390668-2afa-4995-ada6-fe4a2b44afdb" (UID: "85390668-2afa-4995-ada6-fe4a2b44afdb"). InnerVolumeSpecName "kube-api-access-tdphz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.360121 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85390668-2afa-4995-ada6-fe4a2b44afdb" (UID: "85390668-2afa-4995-ada6-fe4a2b44afdb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.374766 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdphz\" (UniqueName: \"kubernetes.io/projected/85390668-2afa-4995-ada6-fe4a2b44afdb-kube-api-access-tdphz\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.374803 4870 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.374814 4870 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85390668-2afa-4995-ada6-fe4a2b44afdb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.374844 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.374852 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.425111 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-config-data" (OuterVolumeSpecName: "config-data") pod "85390668-2afa-4995-ada6-fe4a2b44afdb" (UID: "85390668-2afa-4995-ada6-fe4a2b44afdb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.476401 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85390668-2afa-4995-ada6-fe4a2b44afdb-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.754798 4870 generic.go:334] "Generic (PLEG): container finished" podID="85390668-2afa-4995-ada6-fe4a2b44afdb" containerID="dcc421c124679e744a63c615e3604536c74c1e2533ed2dfa960d5465aa26b09c" exitCode=0 Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.754838 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85390668-2afa-4995-ada6-fe4a2b44afdb","Type":"ContainerDied","Data":"dcc421c124679e744a63c615e3604536c74c1e2533ed2dfa960d5465aa26b09c"} Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.754872 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85390668-2afa-4995-ada6-fe4a2b44afdb","Type":"ContainerDied","Data":"8762aaf83b80c650646d00d537b7824a638f0a0e2d75c844fd776daf6306bec2"} Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.754889 4870 scope.go:117] "RemoveContainer" containerID="59bf16c6f7af67f78064ae9685764054d0a9e931c6a8a18de79002aac9a01529" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.755028 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.789168 4870 scope.go:117] "RemoveContainer" containerID="dcc421c124679e744a63c615e3604536c74c1e2533ed2dfa960d5465aa26b09c" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.790487 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.806101 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.817170 4870 scope.go:117] "RemoveContainer" containerID="59bf16c6f7af67f78064ae9685764054d0a9e931c6a8a18de79002aac9a01529" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.814940 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 17:21:39 crc kubenswrapper[4870]: E0216 17:21:39.818589 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7726c3-83d9-4ab1-99a5-7242373754fd" containerName="init" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.818604 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7726c3-83d9-4ab1-99a5-7242373754fd" containerName="init" Feb 16 17:21:39 crc kubenswrapper[4870]: E0216 17:21:39.818617 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85390668-2afa-4995-ada6-fe4a2b44afdb" containerName="cinder-scheduler" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.818623 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="85390668-2afa-4995-ada6-fe4a2b44afdb" containerName="cinder-scheduler" Feb 16 17:21:39 crc kubenswrapper[4870]: E0216 17:21:39.818637 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7726c3-83d9-4ab1-99a5-7242373754fd" containerName="dnsmasq-dns" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.818643 4870 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ea7726c3-83d9-4ab1-99a5-7242373754fd" containerName="dnsmasq-dns" Feb 16 17:21:39 crc kubenswrapper[4870]: E0216 17:21:39.818654 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85390668-2afa-4995-ada6-fe4a2b44afdb" containerName="probe" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.818659 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="85390668-2afa-4995-ada6-fe4a2b44afdb" containerName="probe" Feb 16 17:21:39 crc kubenswrapper[4870]: E0216 17:21:39.818683 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5faea269-54ff-4f1f-933c-e16bf517fa14" containerName="barbican-api-log" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.818688 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="5faea269-54ff-4f1f-933c-e16bf517fa14" containerName="barbican-api-log" Feb 16 17:21:39 crc kubenswrapper[4870]: E0216 17:21:39.818697 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5faea269-54ff-4f1f-933c-e16bf517fa14" containerName="barbican-api" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.818703 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="5faea269-54ff-4f1f-933c-e16bf517fa14" containerName="barbican-api" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.820339 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="85390668-2afa-4995-ada6-fe4a2b44afdb" containerName="cinder-scheduler" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.820359 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7726c3-83d9-4ab1-99a5-7242373754fd" containerName="dnsmasq-dns" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.820370 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="85390668-2afa-4995-ada6-fe4a2b44afdb" containerName="probe" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.820389 4870 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5faea269-54ff-4f1f-933c-e16bf517fa14" containerName="barbican-api-log" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.820403 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="5faea269-54ff-4f1f-933c-e16bf517fa14" containerName="barbican-api" Feb 16 17:21:39 crc kubenswrapper[4870]: E0216 17:21:39.821442 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59bf16c6f7af67f78064ae9685764054d0a9e931c6a8a18de79002aac9a01529\": container with ID starting with 59bf16c6f7af67f78064ae9685764054d0a9e931c6a8a18de79002aac9a01529 not found: ID does not exist" containerID="59bf16c6f7af67f78064ae9685764054d0a9e931c6a8a18de79002aac9a01529" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.821517 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59bf16c6f7af67f78064ae9685764054d0a9e931c6a8a18de79002aac9a01529"} err="failed to get container status \"59bf16c6f7af67f78064ae9685764054d0a9e931c6a8a18de79002aac9a01529\": rpc error: code = NotFound desc = could not find container \"59bf16c6f7af67f78064ae9685764054d0a9e931c6a8a18de79002aac9a01529\": container with ID starting with 59bf16c6f7af67f78064ae9685764054d0a9e931c6a8a18de79002aac9a01529 not found: ID does not exist" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.821552 4870 scope.go:117] "RemoveContainer" containerID="dcc421c124679e744a63c615e3604536c74c1e2533ed2dfa960d5465aa26b09c" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.822279 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 17:21:39 crc kubenswrapper[4870]: E0216 17:21:39.824279 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcc421c124679e744a63c615e3604536c74c1e2533ed2dfa960d5465aa26b09c\": container with ID starting with dcc421c124679e744a63c615e3604536c74c1e2533ed2dfa960d5465aa26b09c not found: ID does not exist" containerID="dcc421c124679e744a63c615e3604536c74c1e2533ed2dfa960d5465aa26b09c" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.824346 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcc421c124679e744a63c615e3604536c74c1e2533ed2dfa960d5465aa26b09c"} err="failed to get container status \"dcc421c124679e744a63c615e3604536c74c1e2533ed2dfa960d5465aa26b09c\": rpc error: code = NotFound desc = could not find container \"dcc421c124679e744a63c615e3604536c74c1e2533ed2dfa960d5465aa26b09c\": container with ID starting with dcc421c124679e744a63c615e3604536c74c1e2533ed2dfa960d5465aa26b09c not found: ID does not exist" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.828477 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.841001 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.992715 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.993047 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.993184 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.993298 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.993473 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvhqf\" (UniqueName: \"kubernetes.io/projected/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-kube-api-access-nvhqf\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:39 crc kubenswrapper[4870]: I0216 17:21:39.993686 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.096447 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvhqf\" (UniqueName: 
\"kubernetes.io/projected/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-kube-api-access-nvhqf\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.096585 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.096661 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.096695 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.096726 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.096754 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " 
pod="openstack/cinder-scheduler-0" Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.097926 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.103392 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.115554 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.116690 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.117631 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvhqf\" (UniqueName: \"kubernetes.io/projected/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-kube-api-access-nvhqf\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.118068 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1f1bdefa-44ab-4760-9fe6-fea5802dfde1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f1bdefa-44ab-4760-9fe6-fea5802dfde1\") " pod="openstack/cinder-scheduler-0" Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.244183 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85390668-2afa-4995-ada6-fe4a2b44afdb" path="/var/lib/kubelet/pods/85390668-2afa-4995-ada6-fe4a2b44afdb/volumes" Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.321494 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.767426 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee3935ab-edfc-4f1d-ac06-32e332389334","Type":"ContainerStarted","Data":"fc51b2ec57ea80cc70aeced8207f04c22aebcd43e593dcd63d5c771b174d20f4"} Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.767903 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.834164 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.675130484 podStartE2EDuration="9.834145904s" podCreationTimestamp="2026-02-16 17:21:31 +0000 UTC" firstStartedPulling="2026-02-16 17:21:33.704653023 +0000 UTC m=+1298.188117407" lastFinishedPulling="2026-02-16 17:21:39.863668453 +0000 UTC m=+1304.347132827" observedRunningTime="2026-02-16 17:21:40.796818519 +0000 UTC m=+1305.280282903" watchObservedRunningTime="2026-02-16 17:21:40.834145904 +0000 UTC m=+1305.317610288" Feb 16 17:21:40 crc kubenswrapper[4870]: I0216 17:21:40.985663 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 17:21:40 crc kubenswrapper[4870]: W0216 17:21:40.990338 4870 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f1bdefa_44ab_4760_9fe6_fea5802dfde1.slice/crio-d177062e6f34aed03b6a14fca9817878ce81d903977054423b86f3ed7c857b48 WatchSource:0}: Error finding container d177062e6f34aed03b6a14fca9817878ce81d903977054423b86f3ed7c857b48: Status 404 returned error can't find the container with id d177062e6f34aed03b6a14fca9817878ce81d903977054423b86f3ed7c857b48 Feb 16 17:21:41 crc kubenswrapper[4870]: I0216 17:21:41.430609 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:41 crc kubenswrapper[4870]: I0216 17:21:41.448457 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:41 crc kubenswrapper[4870]: I0216 17:21:41.601231 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:41 crc kubenswrapper[4870]: I0216 17:21:41.680226 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-756f867d68-hgndg" Feb 16 17:21:41 crc kubenswrapper[4870]: I0216 17:21:41.769423 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-798bd5bd64-st2b8"] Feb 16 17:21:41 crc kubenswrapper[4870]: I0216 17:21:41.787273 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f1bdefa-44ab-4760-9fe6-fea5802dfde1","Type":"ContainerStarted","Data":"d177062e6f34aed03b6a14fca9817878ce81d903977054423b86f3ed7c857b48"} Feb 16 17:21:41 crc kubenswrapper[4870]: I0216 17:21:41.858064 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-568fd566f-ltx6b" Feb 16 17:21:42 crc kubenswrapper[4870]: I0216 17:21:42.798312 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"1f1bdefa-44ab-4760-9fe6-fea5802dfde1","Type":"ContainerStarted","Data":"b2ef562a1d75606d48683a194ced484cb683524403ed1a9c44ece1ab7326d628"} Feb 16 17:21:42 crc kubenswrapper[4870]: I0216 17:21:42.798782 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f1bdefa-44ab-4760-9fe6-fea5802dfde1","Type":"ContainerStarted","Data":"daea5550dc861804302d88e4887f14d1745a5e57cbb16e4e208fbd0326d1bcba"} Feb 16 17:21:42 crc kubenswrapper[4870]: I0216 17:21:42.798557 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-798bd5bd64-st2b8" podUID="96bd7dab-7469-4449-b1dd-dc57aa17c27c" containerName="placement-api" containerID="cri-o://a6cee6c28b7c7fc1131e837e8c8819fcd918c71ce285cd767866175559afc576" gracePeriod=30 Feb 16 17:21:42 crc kubenswrapper[4870]: I0216 17:21:42.798439 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-798bd5bd64-st2b8" podUID="96bd7dab-7469-4449-b1dd-dc57aa17c27c" containerName="placement-log" containerID="cri-o://065dd553f16ab9b79618d72a86c86f3480fb5e6489a41cc38b8fc388a4af6d86" gracePeriod=30 Feb 16 17:21:42 crc kubenswrapper[4870]: I0216 17:21:42.834137 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.834119398 podStartE2EDuration="3.834119398s" podCreationTimestamp="2026-02-16 17:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:42.826360646 +0000 UTC m=+1307.309825030" watchObservedRunningTime="2026-02-16 17:21:42.834119398 +0000 UTC m=+1307.317583782" Feb 16 17:21:43 crc kubenswrapper[4870]: I0216 17:21:43.810579 4870 generic.go:334] "Generic (PLEG): container finished" podID="96bd7dab-7469-4449-b1dd-dc57aa17c27c" containerID="065dd553f16ab9b79618d72a86c86f3480fb5e6489a41cc38b8fc388a4af6d86" exitCode=143 Feb 16 
17:21:43 crc kubenswrapper[4870]: I0216 17:21:43.811602 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-798bd5bd64-st2b8" event={"ID":"96bd7dab-7469-4449-b1dd-dc57aa17c27c","Type":"ContainerDied","Data":"065dd553f16ab9b79618d72a86c86f3480fb5e6489a41cc38b8fc388a4af6d86"} Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.322871 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.349528 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.456879 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/291e56ef-3b45-4a21-875c-f90daaf45e0b-logs\") pod \"291e56ef-3b45-4a21-875c-f90daaf45e0b\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.457108 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgcsc\" (UniqueName: \"kubernetes.io/projected/291e56ef-3b45-4a21-875c-f90daaf45e0b-kube-api-access-cgcsc\") pod \"291e56ef-3b45-4a21-875c-f90daaf45e0b\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.457217 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-config-data\") pod \"291e56ef-3b45-4a21-875c-f90daaf45e0b\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.457258 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-config-data-custom\") pod 
\"291e56ef-3b45-4a21-875c-f90daaf45e0b\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.457387 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-combined-ca-bundle\") pod \"291e56ef-3b45-4a21-875c-f90daaf45e0b\" (UID: \"291e56ef-3b45-4a21-875c-f90daaf45e0b\") " Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.457564 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/291e56ef-3b45-4a21-875c-f90daaf45e0b-logs" (OuterVolumeSpecName: "logs") pod "291e56ef-3b45-4a21-875c-f90daaf45e0b" (UID: "291e56ef-3b45-4a21-875c-f90daaf45e0b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.458409 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/291e56ef-3b45-4a21-875c-f90daaf45e0b-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.463011 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "291e56ef-3b45-4a21-875c-f90daaf45e0b" (UID: "291e56ef-3b45-4a21-875c-f90daaf45e0b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.488197 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/291e56ef-3b45-4a21-875c-f90daaf45e0b-kube-api-access-cgcsc" (OuterVolumeSpecName: "kube-api-access-cgcsc") pod "291e56ef-3b45-4a21-875c-f90daaf45e0b" (UID: "291e56ef-3b45-4a21-875c-f90daaf45e0b"). InnerVolumeSpecName "kube-api-access-cgcsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.512347 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-config-data" (OuterVolumeSpecName: "config-data") pod "291e56ef-3b45-4a21-875c-f90daaf45e0b" (UID: "291e56ef-3b45-4a21-875c-f90daaf45e0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.517598 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "291e56ef-3b45-4a21-875c-f90daaf45e0b" (UID: "291e56ef-3b45-4a21-875c-f90daaf45e0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.560750 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgcsc\" (UniqueName: \"kubernetes.io/projected/291e56ef-3b45-4a21-875c-f90daaf45e0b-kube-api-access-cgcsc\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.560805 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.560817 4870 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.560826 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291e56ef-3b45-4a21-875c-f90daaf45e0b-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.670072 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-8f5dc8565-bnkj8"] Feb 16 17:21:45 crc kubenswrapper[4870]: E0216 17:21:45.670589 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291e56ef-3b45-4a21-875c-f90daaf45e0b" containerName="barbican-api" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.670611 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="291e56ef-3b45-4a21-875c-f90daaf45e0b" containerName="barbican-api" Feb 16 17:21:45 crc kubenswrapper[4870]: E0216 17:21:45.670648 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291e56ef-3b45-4a21-875c-f90daaf45e0b" containerName="barbican-api-log" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.670656 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="291e56ef-3b45-4a21-875c-f90daaf45e0b" containerName="barbican-api-log" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.670882 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="291e56ef-3b45-4a21-875c-f90daaf45e0b" containerName="barbican-api" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.670917 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="291e56ef-3b45-4a21-875c-f90daaf45e0b" containerName="barbican-api-log" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.672255 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.678238 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.678387 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.679987 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.699855 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8f5dc8565-bnkj8"] Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.832108 4870 generic.go:334] "Generic (PLEG): container finished" podID="291e56ef-3b45-4a21-875c-f90daaf45e0b" containerID="d2e81256ba3cf850d427a2f81fa1b2115f18a2b6026e37ae0d638344cf6afa2f" exitCode=137 Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.832165 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6dbb74864-cqlt9" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.832174 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dbb74864-cqlt9" event={"ID":"291e56ef-3b45-4a21-875c-f90daaf45e0b","Type":"ContainerDied","Data":"d2e81256ba3cf850d427a2f81fa1b2115f18a2b6026e37ae0d638344cf6afa2f"} Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.832234 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dbb74864-cqlt9" event={"ID":"291e56ef-3b45-4a21-875c-f90daaf45e0b","Type":"ContainerDied","Data":"7887937d3123398f4cd84ecaf1469495c51d0db32677ddb49b3e65694fa1d308"} Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.832260 4870 scope.go:117] "RemoveContainer" containerID="d2e81256ba3cf850d427a2f81fa1b2115f18a2b6026e37ae0d638344cf6afa2f" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.866802 4870 scope.go:117] "RemoveContainer" containerID="9b695a6011d66493e4f8cc271c1a6eea680f0af61579eaf18507cf4124d6731c" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.868707 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc25f232-f484-409d-ac24-fc126dc679d4-log-httpd\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.868723 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6dbb74864-cqlt9"] Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.868778 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc25f232-f484-409d-ac24-fc126dc679d4-combined-ca-bundle\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " 
pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.869204 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57rqs\" (UniqueName: \"kubernetes.io/projected/cc25f232-f484-409d-ac24-fc126dc679d4-kube-api-access-57rqs\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.869247 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc25f232-f484-409d-ac24-fc126dc679d4-public-tls-certs\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.869334 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc25f232-f484-409d-ac24-fc126dc679d4-internal-tls-certs\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.869913 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cc25f232-f484-409d-ac24-fc126dc679d4-etc-swift\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.870014 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc25f232-f484-409d-ac24-fc126dc679d4-config-data\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: 
\"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.870116 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc25f232-f484-409d-ac24-fc126dc679d4-run-httpd\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.880280 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6dbb74864-cqlt9"] Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.949198 4870 scope.go:117] "RemoveContainer" containerID="d2e81256ba3cf850d427a2f81fa1b2115f18a2b6026e37ae0d638344cf6afa2f" Feb 16 17:21:45 crc kubenswrapper[4870]: E0216 17:21:45.949959 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2e81256ba3cf850d427a2f81fa1b2115f18a2b6026e37ae0d638344cf6afa2f\": container with ID starting with d2e81256ba3cf850d427a2f81fa1b2115f18a2b6026e37ae0d638344cf6afa2f not found: ID does not exist" containerID="d2e81256ba3cf850d427a2f81fa1b2115f18a2b6026e37ae0d638344cf6afa2f" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.949983 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2e81256ba3cf850d427a2f81fa1b2115f18a2b6026e37ae0d638344cf6afa2f"} err="failed to get container status \"d2e81256ba3cf850d427a2f81fa1b2115f18a2b6026e37ae0d638344cf6afa2f\": rpc error: code = NotFound desc = could not find container \"d2e81256ba3cf850d427a2f81fa1b2115f18a2b6026e37ae0d638344cf6afa2f\": container with ID starting with d2e81256ba3cf850d427a2f81fa1b2115f18a2b6026e37ae0d638344cf6afa2f not found: ID does not exist" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.950003 4870 scope.go:117] "RemoveContainer" 
containerID="9b695a6011d66493e4f8cc271c1a6eea680f0af61579eaf18507cf4124d6731c" Feb 16 17:21:45 crc kubenswrapper[4870]: E0216 17:21:45.950465 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b695a6011d66493e4f8cc271c1a6eea680f0af61579eaf18507cf4124d6731c\": container with ID starting with 9b695a6011d66493e4f8cc271c1a6eea680f0af61579eaf18507cf4124d6731c not found: ID does not exist" containerID="9b695a6011d66493e4f8cc271c1a6eea680f0af61579eaf18507cf4124d6731c" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.950483 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b695a6011d66493e4f8cc271c1a6eea680f0af61579eaf18507cf4124d6731c"} err="failed to get container status \"9b695a6011d66493e4f8cc271c1a6eea680f0af61579eaf18507cf4124d6731c\": rpc error: code = NotFound desc = could not find container \"9b695a6011d66493e4f8cc271c1a6eea680f0af61579eaf18507cf4124d6731c\": container with ID starting with 9b695a6011d66493e4f8cc271c1a6eea680f0af61579eaf18507cf4124d6731c not found: ID does not exist" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.971806 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc25f232-f484-409d-ac24-fc126dc679d4-run-httpd\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.971906 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc25f232-f484-409d-ac24-fc126dc679d4-log-httpd\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.971942 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc25f232-f484-409d-ac24-fc126dc679d4-combined-ca-bundle\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.972024 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57rqs\" (UniqueName: \"kubernetes.io/projected/cc25f232-f484-409d-ac24-fc126dc679d4-kube-api-access-57rqs\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.972047 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc25f232-f484-409d-ac24-fc126dc679d4-public-tls-certs\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.972079 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc25f232-f484-409d-ac24-fc126dc679d4-internal-tls-certs\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.972104 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cc25f232-f484-409d-ac24-fc126dc679d4-etc-swift\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.972133 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/cc25f232-f484-409d-ac24-fc126dc679d4-config-data\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.973808 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc25f232-f484-409d-ac24-fc126dc679d4-run-httpd\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.974092 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc25f232-f484-409d-ac24-fc126dc679d4-log-httpd\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.984156 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc25f232-f484-409d-ac24-fc126dc679d4-public-tls-certs\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.984176 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc25f232-f484-409d-ac24-fc126dc679d4-combined-ca-bundle\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.985559 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc25f232-f484-409d-ac24-fc126dc679d4-internal-tls-certs\") pod 
\"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.986443 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc25f232-f484-409d-ac24-fc126dc679d4-config-data\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.986927 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cc25f232-f484-409d-ac24-fc126dc679d4-etc-swift\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:45 crc kubenswrapper[4870]: I0216 17:21:45.990639 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57rqs\" (UniqueName: \"kubernetes.io/projected/cc25f232-f484-409d-ac24-fc126dc679d4-kube-api-access-57rqs\") pod \"swift-proxy-8f5dc8565-bnkj8\" (UID: \"cc25f232-f484-409d-ac24-fc126dc679d4\") " pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.002935 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.248862 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="291e56ef-3b45-4a21-875c-f90daaf45e0b" path="/var/lib/kubelet/pods/291e56ef-3b45-4a21-875c-f90daaf45e0b/volumes" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.559835 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.648762 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 16 17:21:46 crc kubenswrapper[4870]: E0216 17:21:46.649223 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96bd7dab-7469-4449-b1dd-dc57aa17c27c" containerName="placement-log" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.649239 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="96bd7dab-7469-4449-b1dd-dc57aa17c27c" containerName="placement-log" Feb 16 17:21:46 crc kubenswrapper[4870]: E0216 17:21:46.649269 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96bd7dab-7469-4449-b1dd-dc57aa17c27c" containerName="placement-api" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.649276 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="96bd7dab-7469-4449-b1dd-dc57aa17c27c" containerName="placement-api" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.649459 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="96bd7dab-7469-4449-b1dd-dc57aa17c27c" containerName="placement-log" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.649484 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="96bd7dab-7469-4449-b1dd-dc57aa17c27c" containerName="placement-api" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.650164 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.652868 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.653028 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-9f5nh" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.656183 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.661690 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.696619 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bd7dab-7469-4449-b1dd-dc57aa17c27c-logs\") pod \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.696689 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-combined-ca-bundle\") pod \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.696754 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-config-data\") pod \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.696782 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-internal-tls-certs\") pod \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.696813 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt4ph\" (UniqueName: \"kubernetes.io/projected/96bd7dab-7469-4449-b1dd-dc57aa17c27c-kube-api-access-qt4ph\") pod \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.696839 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-scripts\") pod \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.696989 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-public-tls-certs\") pod \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\" (UID: \"96bd7dab-7469-4449-b1dd-dc57aa17c27c\") " Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.705848 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-scripts" (OuterVolumeSpecName: "scripts") pod "96bd7dab-7469-4449-b1dd-dc57aa17c27c" (UID: "96bd7dab-7469-4449-b1dd-dc57aa17c27c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.706321 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96bd7dab-7469-4449-b1dd-dc57aa17c27c-kube-api-access-qt4ph" (OuterVolumeSpecName: "kube-api-access-qt4ph") pod "96bd7dab-7469-4449-b1dd-dc57aa17c27c" (UID: "96bd7dab-7469-4449-b1dd-dc57aa17c27c"). InnerVolumeSpecName "kube-api-access-qt4ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.708381 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96bd7dab-7469-4449-b1dd-dc57aa17c27c-logs" (OuterVolumeSpecName: "logs") pod "96bd7dab-7469-4449-b1dd-dc57aa17c27c" (UID: "96bd7dab-7469-4449-b1dd-dc57aa17c27c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.760811 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-config-data" (OuterVolumeSpecName: "config-data") pod "96bd7dab-7469-4449-b1dd-dc57aa17c27c" (UID: "96bd7dab-7469-4449-b1dd-dc57aa17c27c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.788263 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8f5dc8565-bnkj8"] Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.788562 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96bd7dab-7469-4449-b1dd-dc57aa17c27c" (UID: "96bd7dab-7469-4449-b1dd-dc57aa17c27c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.798996 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\") " pod="openstack/openstackclient" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.799116 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhlcg\" (UniqueName: \"kubernetes.io/projected/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-kube-api-access-vhlcg\") pod \"openstackclient\" (UID: \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\") " pod="openstack/openstackclient" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.799187 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-openstack-config-secret\") pod \"openstackclient\" (UID: \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\") " pod="openstack/openstackclient" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.799288 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-openstack-config\") pod \"openstackclient\" (UID: \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\") " pod="openstack/openstackclient" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.799415 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bd7dab-7469-4449-b1dd-dc57aa17c27c-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.799434 4870 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.799447 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.799460 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt4ph\" (UniqueName: \"kubernetes.io/projected/96bd7dab-7469-4449-b1dd-dc57aa17c27c-kube-api-access-qt4ph\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.799470 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.842374 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8f5dc8565-bnkj8" event={"ID":"cc25f232-f484-409d-ac24-fc126dc679d4","Type":"ContainerStarted","Data":"c742c3ba383388555c30b6cae6023a310c792f343653a5b473cf3b9b94523245"} Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.844816 4870 generic.go:334] "Generic (PLEG): container finished" podID="96bd7dab-7469-4449-b1dd-dc57aa17c27c" containerID="a6cee6c28b7c7fc1131e837e8c8819fcd918c71ce285cd767866175559afc576" exitCode=0 Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.844846 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-798bd5bd64-st2b8" event={"ID":"96bd7dab-7469-4449-b1dd-dc57aa17c27c","Type":"ContainerDied","Data":"a6cee6c28b7c7fc1131e837e8c8819fcd918c71ce285cd767866175559afc576"} Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.844865 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-798bd5bd64-st2b8" 
event={"ID":"96bd7dab-7469-4449-b1dd-dc57aa17c27c","Type":"ContainerDied","Data":"1de2d4c753b27663799ff9e9faba61e66339b178a4eab40c3d98886fb7281f31"} Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.844881 4870 scope.go:117] "RemoveContainer" containerID="a6cee6c28b7c7fc1131e837e8c8819fcd918c71ce285cd767866175559afc576" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.845025 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-798bd5bd64-st2b8" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.849768 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "96bd7dab-7469-4449-b1dd-dc57aa17c27c" (UID: "96bd7dab-7469-4449-b1dd-dc57aa17c27c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.857143 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "96bd7dab-7469-4449-b1dd-dc57aa17c27c" (UID: "96bd7dab-7469-4449-b1dd-dc57aa17c27c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.903113 4870 scope.go:117] "RemoveContainer" containerID="065dd553f16ab9b79618d72a86c86f3480fb5e6489a41cc38b8fc388a4af6d86" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.909916 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\") " pod="openstack/openstackclient" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.910053 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhlcg\" (UniqueName: \"kubernetes.io/projected/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-kube-api-access-vhlcg\") pod \"openstackclient\" (UID: \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\") " pod="openstack/openstackclient" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.910112 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-openstack-config-secret\") pod \"openstackclient\" (UID: \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\") " pod="openstack/openstackclient" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.910189 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-openstack-config\") pod \"openstackclient\" (UID: \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\") " pod="openstack/openstackclient" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.911604 4870 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 
17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.912970 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-openstack-config\") pod \"openstackclient\" (UID: \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\") " pod="openstack/openstackclient" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.913478 4870 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bd7dab-7469-4449-b1dd-dc57aa17c27c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.915000 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-openstack-config-secret\") pod \"openstackclient\" (UID: \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\") " pod="openstack/openstackclient" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.916272 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\") " pod="openstack/openstackclient" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.930635 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhlcg\" (UniqueName: \"kubernetes.io/projected/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-kube-api-access-vhlcg\") pod \"openstackclient\" (UID: \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\") " pod="openstack/openstackclient" Feb 16 17:21:46 crc kubenswrapper[4870]: I0216 17:21:46.986722 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.044074 4870 scope.go:117] "RemoveContainer" containerID="a6cee6c28b7c7fc1131e837e8c8819fcd918c71ce285cd767866175559afc576" Feb 16 17:21:47 crc kubenswrapper[4870]: E0216 17:21:47.046690 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6cee6c28b7c7fc1131e837e8c8819fcd918c71ce285cd767866175559afc576\": container with ID starting with a6cee6c28b7c7fc1131e837e8c8819fcd918c71ce285cd767866175559afc576 not found: ID does not exist" containerID="a6cee6c28b7c7fc1131e837e8c8819fcd918c71ce285cd767866175559afc576" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.046732 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6cee6c28b7c7fc1131e837e8c8819fcd918c71ce285cd767866175559afc576"} err="failed to get container status \"a6cee6c28b7c7fc1131e837e8c8819fcd918c71ce285cd767866175559afc576\": rpc error: code = NotFound desc = could not find container \"a6cee6c28b7c7fc1131e837e8c8819fcd918c71ce285cd767866175559afc576\": container with ID starting with a6cee6c28b7c7fc1131e837e8c8819fcd918c71ce285cd767866175559afc576 not found: ID does not exist" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.046759 4870 scope.go:117] "RemoveContainer" containerID="065dd553f16ab9b79618d72a86c86f3480fb5e6489a41cc38b8fc388a4af6d86" Feb 16 17:21:47 crc kubenswrapper[4870]: E0216 17:21:47.048455 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"065dd553f16ab9b79618d72a86c86f3480fb5e6489a41cc38b8fc388a4af6d86\": container with ID starting with 065dd553f16ab9b79618d72a86c86f3480fb5e6489a41cc38b8fc388a4af6d86 not found: ID does not exist" containerID="065dd553f16ab9b79618d72a86c86f3480fb5e6489a41cc38b8fc388a4af6d86" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 
17:21:47.048505 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"065dd553f16ab9b79618d72a86c86f3480fb5e6489a41cc38b8fc388a4af6d86"} err="failed to get container status \"065dd553f16ab9b79618d72a86c86f3480fb5e6489a41cc38b8fc388a4af6d86\": rpc error: code = NotFound desc = could not find container \"065dd553f16ab9b79618d72a86c86f3480fb5e6489a41cc38b8fc388a4af6d86\": container with ID starting with 065dd553f16ab9b79618d72a86c86f3480fb5e6489a41cc38b8fc388a4af6d86 not found: ID does not exist" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.098616 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.117010 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.130386 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.131902 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.144472 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 17:21:47 crc kubenswrapper[4870]: E0216 17:21:47.165933 4870 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 17:21:47 crc kubenswrapper[4870]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_3ff62ee0-06f4-4abb-90d5-637ae84d61d7_0(a5549d1df30f788cf0301b021c3f89ec9bf2dbb304b9a0fadba0f6f3207da180): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a5549d1df30f788cf0301b021c3f89ec9bf2dbb304b9a0fadba0f6f3207da180" Netns:"/var/run/netns/9c5726f8-49db-41bb-b3dc-9d6859be798c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=a5549d1df30f788cf0301b021c3f89ec9bf2dbb304b9a0fadba0f6f3207da180;K8S_POD_UID=3ff62ee0-06f4-4abb-90d5-637ae84d61d7" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/3ff62ee0-06f4-4abb-90d5-637ae84d61d7]: expected pod UID "3ff62ee0-06f4-4abb-90d5-637ae84d61d7" but got "e5ffb6c2-c33b-4118-985e-52a0e14ba938" from Kube API Feb 16 17:21:47 crc kubenswrapper[4870]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:21:47 crc kubenswrapper[4870]: > Feb 16 17:21:47 crc kubenswrapper[4870]: E0216 17:21:47.166021 4870 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err=< Feb 16 17:21:47 crc kubenswrapper[4870]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_3ff62ee0-06f4-4abb-90d5-637ae84d61d7_0(a5549d1df30f788cf0301b021c3f89ec9bf2dbb304b9a0fadba0f6f3207da180): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a5549d1df30f788cf0301b021c3f89ec9bf2dbb304b9a0fadba0f6f3207da180" Netns:"/var/run/netns/9c5726f8-49db-41bb-b3dc-9d6859be798c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=a5549d1df30f788cf0301b021c3f89ec9bf2dbb304b9a0fadba0f6f3207da180;K8S_POD_UID=3ff62ee0-06f4-4abb-90d5-637ae84d61d7" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/3ff62ee0-06f4-4abb-90d5-637ae84d61d7]: expected pod UID "3ff62ee0-06f4-4abb-90d5-637ae84d61d7" but got "e5ffb6c2-c33b-4118-985e-52a0e14ba938" from Kube API Feb 16 17:21:47 crc kubenswrapper[4870]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:21:47 crc kubenswrapper[4870]: > pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.235123 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-798bd5bd64-st2b8"] Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.244325 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-798bd5bd64-st2b8"] Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 
17:21:47.322268 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ffb6c2-c33b-4118-985e-52a0e14ba938-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e5ffb6c2-c33b-4118-985e-52a0e14ba938\") " pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.322838 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5ffb6c2-c33b-4118-985e-52a0e14ba938-openstack-config\") pod \"openstackclient\" (UID: \"e5ffb6c2-c33b-4118-985e-52a0e14ba938\") " pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.323066 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhgn8\" (UniqueName: \"kubernetes.io/projected/e5ffb6c2-c33b-4118-985e-52a0e14ba938-kube-api-access-dhgn8\") pod \"openstackclient\" (UID: \"e5ffb6c2-c33b-4118-985e-52a0e14ba938\") " pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.323155 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5ffb6c2-c33b-4118-985e-52a0e14ba938-openstack-config-secret\") pod \"openstackclient\" (UID: \"e5ffb6c2-c33b-4118-985e-52a0e14ba938\") " pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.425748 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ffb6c2-c33b-4118-985e-52a0e14ba938-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e5ffb6c2-c33b-4118-985e-52a0e14ba938\") " pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.425843 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5ffb6c2-c33b-4118-985e-52a0e14ba938-openstack-config\") pod \"openstackclient\" (UID: \"e5ffb6c2-c33b-4118-985e-52a0e14ba938\") " pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.426075 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhgn8\" (UniqueName: \"kubernetes.io/projected/e5ffb6c2-c33b-4118-985e-52a0e14ba938-kube-api-access-dhgn8\") pod \"openstackclient\" (UID: \"e5ffb6c2-c33b-4118-985e-52a0e14ba938\") " pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.426166 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e5ffb6c2-c33b-4118-985e-52a0e14ba938-openstack-config-secret\") pod \"openstackclient\" (UID: \"e5ffb6c2-c33b-4118-985e-52a0e14ba938\") " pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.427417 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e5ffb6c2-c33b-4118-985e-52a0e14ba938-openstack-config\") pod \"openstackclient\" (UID: \"e5ffb6c2-c33b-4118-985e-52a0e14ba938\") " pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.429668 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ffb6c2-c33b-4118-985e-52a0e14ba938-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e5ffb6c2-c33b-4118-985e-52a0e14ba938\") " pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.434429 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/e5ffb6c2-c33b-4118-985e-52a0e14ba938-openstack-config-secret\") pod \"openstackclient\" (UID: \"e5ffb6c2-c33b-4118-985e-52a0e14ba938\") " pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.453100 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhgn8\" (UniqueName: \"kubernetes.io/projected/e5ffb6c2-c33b-4118-985e-52a0e14ba938-kube-api-access-dhgn8\") pod \"openstackclient\" (UID: \"e5ffb6c2-c33b-4118-985e-52a0e14ba938\") " pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.523154 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.858592 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8f5dc8565-bnkj8" event={"ID":"cc25f232-f484-409d-ac24-fc126dc679d4","Type":"ContainerStarted","Data":"badab7ad1c6f1475d3ad0e2c73bc3d35f2887a54a1fa691899b73539262a598f"} Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.859129 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.859150 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.859160 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8f5dc8565-bnkj8" event={"ID":"cc25f232-f484-409d-ac24-fc126dc679d4","Type":"ContainerStarted","Data":"1e717f33bfa36e8fc7ff1e929414e100c813686aba9aa426853c53144bde229c"} Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.860846 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.873918 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.909350 4870 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="3ff62ee0-06f4-4abb-90d5-637ae84d61d7" podUID="e5ffb6c2-c33b-4118-985e-52a0e14ba938" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.975372 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-8f5dc8565-bnkj8" podStartSLOduration=2.975346678 podStartE2EDuration="2.975346678s" podCreationTimestamp="2026-02-16 17:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:21:47.905960859 +0000 UTC m=+1312.389425243" watchObservedRunningTime="2026-02-16 17:21:47.975346678 +0000 UTC m=+1312.458811062" Feb 16 17:21:47 crc kubenswrapper[4870]: I0216 17:21:47.989499 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.027373 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.027767 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerName="ceilometer-central-agent" containerID="cri-o://6a6b50ebfcb8160381df1fdb499884d07874760845a3b7b2641d7b0d204b330b" gracePeriod=30 Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.028595 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerName="proxy-httpd" containerID="cri-o://fc51b2ec57ea80cc70aeced8207f04c22aebcd43e593dcd63d5c771b174d20f4" gracePeriod=30 Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.028660 4870 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerName="sg-core" containerID="cri-o://aa54c965768dd2b324933144e18df78fa4d3eef471fb68f9c56ebd58545061ce" gracePeriod=30 Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.028703 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerName="ceilometer-notification-agent" containerID="cri-o://2215d38d8d240ec90fad20daf7212c423ba48735ed26f8833d4afc8598fb5a86" gracePeriod=30 Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.037670 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhlcg\" (UniqueName: \"kubernetes.io/projected/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-kube-api-access-vhlcg\") pod \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\" (UID: \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\") " Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.037745 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-openstack-config-secret\") pod \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\" (UID: \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\") " Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.037791 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-combined-ca-bundle\") pod \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\" (UID: \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\") " Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.037822 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-openstack-config\") pod 
\"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\" (UID: \"3ff62ee0-06f4-4abb-90d5-637ae84d61d7\") " Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.040491 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "3ff62ee0-06f4-4abb-90d5-637ae84d61d7" (UID: "3ff62ee0-06f4-4abb-90d5-637ae84d61d7"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.051990 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-kube-api-access-vhlcg" (OuterVolumeSpecName: "kube-api-access-vhlcg") pod "3ff62ee0-06f4-4abb-90d5-637ae84d61d7" (UID: "3ff62ee0-06f4-4abb-90d5-637ae84d61d7"). InnerVolumeSpecName "kube-api-access-vhlcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.052077 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ff62ee0-06f4-4abb-90d5-637ae84d61d7" (UID: "3ff62ee0-06f4-4abb-90d5-637ae84d61d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.055820 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "3ff62ee0-06f4-4abb-90d5-637ae84d61d7" (UID: "3ff62ee0-06f4-4abb-90d5-637ae84d61d7"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.140706 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhlcg\" (UniqueName: \"kubernetes.io/projected/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-kube-api-access-vhlcg\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.141065 4870 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.141081 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.141096 4870 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3ff62ee0-06f4-4abb-90d5-637ae84d61d7-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.270773 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ff62ee0-06f4-4abb-90d5-637ae84d61d7" path="/var/lib/kubelet/pods/3ff62ee0-06f4-4abb-90d5-637ae84d61d7/volumes" Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.272102 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96bd7dab-7469-4449-b1dd-dc57aa17c27c" path="/var/lib/kubelet/pods/96bd7dab-7469-4449-b1dd-dc57aa17c27c/volumes" Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.885009 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e5ffb6c2-c33b-4118-985e-52a0e14ba938","Type":"ContainerStarted","Data":"b1f907d55f46b15ff023d0ab8a0f178026c5e20fdd8b77249441bf873fb1785b"} Feb 16 17:21:48 crc 
kubenswrapper[4870]: I0216 17:21:48.898939 4870 generic.go:334] "Generic (PLEG): container finished" podID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerID="fc51b2ec57ea80cc70aeced8207f04c22aebcd43e593dcd63d5c771b174d20f4" exitCode=0 Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.899152 4870 generic.go:334] "Generic (PLEG): container finished" podID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerID="aa54c965768dd2b324933144e18df78fa4d3eef471fb68f9c56ebd58545061ce" exitCode=2 Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.899216 4870 generic.go:334] "Generic (PLEG): container finished" podID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerID="6a6b50ebfcb8160381df1fdb499884d07874760845a3b7b2641d7b0d204b330b" exitCode=0 Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.898998 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee3935ab-edfc-4f1d-ac06-32e332389334","Type":"ContainerDied","Data":"fc51b2ec57ea80cc70aeced8207f04c22aebcd43e593dcd63d5c771b174d20f4"} Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.899344 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee3935ab-edfc-4f1d-ac06-32e332389334","Type":"ContainerDied","Data":"aa54c965768dd2b324933144e18df78fa4d3eef471fb68f9c56ebd58545061ce"} Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.899387 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee3935ab-edfc-4f1d-ac06-32e332389334","Type":"ContainerDied","Data":"6a6b50ebfcb8160381df1fdb499884d07874760845a3b7b2641d7b0d204b330b"} Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.899482 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 17:21:48 crc kubenswrapper[4870]: I0216 17:21:48.923358 4870 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="3ff62ee0-06f4-4abb-90d5-637ae84d61d7" podUID="e5ffb6c2-c33b-4118-985e-52a0e14ba938" Feb 16 17:21:49 crc kubenswrapper[4870]: I0216 17:21:49.916634 4870 generic.go:334] "Generic (PLEG): container finished" podID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerID="2215d38d8d240ec90fad20daf7212c423ba48735ed26f8833d4afc8598fb5a86" exitCode=0 Feb 16 17:21:49 crc kubenswrapper[4870]: I0216 17:21:49.917027 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee3935ab-edfc-4f1d-ac06-32e332389334","Type":"ContainerDied","Data":"2215d38d8d240ec90fad20daf7212c423ba48735ed26f8833d4afc8598fb5a86"} Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.208925 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.263151 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dbb74864-cqlt9" podUID="291e56ef-3b45-4a21-875c-f90daaf45e0b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.263156 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dbb74864-cqlt9" podUID="291e56ef-3b45-4a21-875c-f90daaf45e0b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.293935 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfznw\" (UniqueName: \"kubernetes.io/projected/ee3935ab-edfc-4f1d-ac06-32e332389334-kube-api-access-hfznw\") pod \"ee3935ab-edfc-4f1d-ac06-32e332389334\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.294076 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-scripts\") pod \"ee3935ab-edfc-4f1d-ac06-32e332389334\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.294110 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee3935ab-edfc-4f1d-ac06-32e332389334-run-httpd\") pod \"ee3935ab-edfc-4f1d-ac06-32e332389334\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.294139 4870 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-combined-ca-bundle\") pod \"ee3935ab-edfc-4f1d-ac06-32e332389334\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.294198 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-config-data\") pod \"ee3935ab-edfc-4f1d-ac06-32e332389334\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.294257 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee3935ab-edfc-4f1d-ac06-32e332389334-log-httpd\") pod \"ee3935ab-edfc-4f1d-ac06-32e332389334\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.294352 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-sg-core-conf-yaml\") pod \"ee3935ab-edfc-4f1d-ac06-32e332389334\" (UID: \"ee3935ab-edfc-4f1d-ac06-32e332389334\") " Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.294721 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee3935ab-edfc-4f1d-ac06-32e332389334-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ee3935ab-edfc-4f1d-ac06-32e332389334" (UID: "ee3935ab-edfc-4f1d-ac06-32e332389334"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.296189 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee3935ab-edfc-4f1d-ac06-32e332389334-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ee3935ab-edfc-4f1d-ac06-32e332389334" (UID: "ee3935ab-edfc-4f1d-ac06-32e332389334"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.297149 4870 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee3935ab-edfc-4f1d-ac06-32e332389334-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.302661 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee3935ab-edfc-4f1d-ac06-32e332389334-kube-api-access-hfznw" (OuterVolumeSpecName: "kube-api-access-hfznw") pod "ee3935ab-edfc-4f1d-ac06-32e332389334" (UID: "ee3935ab-edfc-4f1d-ac06-32e332389334"). InnerVolumeSpecName "kube-api-access-hfznw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.316736 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-scripts" (OuterVolumeSpecName: "scripts") pod "ee3935ab-edfc-4f1d-ac06-32e332389334" (UID: "ee3935ab-edfc-4f1d-ac06-32e332389334"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.339312 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ee3935ab-edfc-4f1d-ac06-32e332389334" (UID: "ee3935ab-edfc-4f1d-ac06-32e332389334"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.400422 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfznw\" (UniqueName: \"kubernetes.io/projected/ee3935ab-edfc-4f1d-ac06-32e332389334-kube-api-access-hfznw\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.400477 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.400490 4870 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee3935ab-edfc-4f1d-ac06-32e332389334-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.400504 4870 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.416091 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee3935ab-edfc-4f1d-ac06-32e332389334" (UID: "ee3935ab-edfc-4f1d-ac06-32e332389334"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.449481 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-config-data" (OuterVolumeSpecName: "config-data") pod "ee3935ab-edfc-4f1d-ac06-32e332389334" (UID: "ee3935ab-edfc-4f1d-ac06-32e332389334"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.502487 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.502711 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3935ab-edfc-4f1d-ac06-32e332389334-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.592721 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.932746 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee3935ab-edfc-4f1d-ac06-32e332389334","Type":"ContainerDied","Data":"aa9439519d0995bd2925ed5ed08eba7823de537ba848fc6ebe415824b713bf59"} Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.932796 4870 scope.go:117] "RemoveContainer" containerID="fc51b2ec57ea80cc70aeced8207f04c22aebcd43e593dcd63d5c771b174d20f4" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.932961 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.982270 4870 scope.go:117] "RemoveContainer" containerID="aa54c965768dd2b324933144e18df78fa4d3eef471fb68f9c56ebd58545061ce" Feb 16 17:21:50 crc kubenswrapper[4870]: I0216 17:21:50.985784 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.006027 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.018141 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:21:51 crc kubenswrapper[4870]: E0216 17:21:51.018656 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerName="proxy-httpd" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.018678 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerName="proxy-httpd" Feb 16 17:21:51 crc kubenswrapper[4870]: E0216 17:21:51.018693 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerName="sg-core" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.018700 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerName="sg-core" Feb 16 17:21:51 crc kubenswrapper[4870]: E0216 17:21:51.018716 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerName="ceilometer-notification-agent" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.018725 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerName="ceilometer-notification-agent" Feb 16 17:21:51 crc kubenswrapper[4870]: E0216 17:21:51.018744 4870 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerName="ceilometer-central-agent" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.018750 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerName="ceilometer-central-agent" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.018962 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerName="proxy-httpd" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.018975 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerName="ceilometer-notification-agent" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.018988 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerName="ceilometer-central-agent" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.018998 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3935ab-edfc-4f1d-ac06-32e332389334" containerName="sg-core" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.021278 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.024980 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.026141 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.030236 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.034748 4870 scope.go:117] "RemoveContainer" containerID="2215d38d8d240ec90fad20daf7212c423ba48735ed26f8833d4afc8598fb5a86" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.071851 4870 scope.go:117] "RemoveContainer" containerID="6a6b50ebfcb8160381df1fdb499884d07874760845a3b7b2641d7b0d204b330b" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.117664 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4675\" (UniqueName: \"kubernetes.io/projected/888818e1-6d0c-455f-85ad-021dd68c2510-kube-api-access-t4675\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.117844 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-config-data\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.117899 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/888818e1-6d0c-455f-85ad-021dd68c2510-log-httpd\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " 
pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.118210 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.118297 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-scripts\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.118436 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.118533 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/888818e1-6d0c-455f-85ad-021dd68c2510-run-httpd\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.220853 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4675\" (UniqueName: \"kubernetes.io/projected/888818e1-6d0c-455f-85ad-021dd68c2510-kube-api-access-t4675\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.220934 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-config-data\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.220979 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/888818e1-6d0c-455f-85ad-021dd68c2510-log-httpd\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.221087 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.221127 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-scripts\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.221172 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.221213 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/888818e1-6d0c-455f-85ad-021dd68c2510-run-httpd\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " 
pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.221865 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/888818e1-6d0c-455f-85ad-021dd68c2510-log-httpd\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.221868 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/888818e1-6d0c-455f-85ad-021dd68c2510-run-httpd\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: E0216 17:21:51.224128 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.226101 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-config-data\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.227323 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.227488 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-scripts\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.241657 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.243437 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4675\" (UniqueName: \"kubernetes.io/projected/888818e1-6d0c-455f-85ad-021dd68c2510-kube-api-access-t4675\") pod \"ceilometer-0\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.350500 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.881686 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:21:51 crc kubenswrapper[4870]: W0216 17:21:51.907176 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod888818e1_6d0c_455f_85ad_021dd68c2510.slice/crio-b3f77a003ab83db30e3dff292ac696a554b4d9d55497f1606a110718ed882b6c WatchSource:0}: Error finding container b3f77a003ab83db30e3dff292ac696a554b4d9d55497f1606a110718ed882b6c: Status 404 returned error can't find the container with id b3f77a003ab83db30e3dff292ac696a554b4d9d55497f1606a110718ed882b6c Feb 16 17:21:51 crc kubenswrapper[4870]: I0216 17:21:51.955045 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"888818e1-6d0c-455f-85ad-021dd68c2510","Type":"ContainerStarted","Data":"b3f77a003ab83db30e3dff292ac696a554b4d9d55497f1606a110718ed882b6c"} Feb 16 17:21:52 crc kubenswrapper[4870]: I0216 17:21:52.238294 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee3935ab-edfc-4f1d-ac06-32e332389334" path="/var/lib/kubelet/pods/ee3935ab-edfc-4f1d-ac06-32e332389334/volumes" Feb 16 17:21:52 crc kubenswrapper[4870]: I0216 17:21:52.783216 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-67bf48b897-78ftj" Feb 16 17:21:52 crc kubenswrapper[4870]: I0216 17:21:52.857963 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-686cd77f6d-7xrcx"] Feb 16 17:21:52 crc kubenswrapper[4870]: I0216 17:21:52.858247 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-686cd77f6d-7xrcx" podUID="0de98d85-29b8-44b5-b120-72c0c42e4714" containerName="neutron-api" containerID="cri-o://dc694537ffbad444d7679dfda4dd89d7882b89b56e2c3de4fd663fbd4021d6cd" gracePeriod=30 Feb 16 
17:21:52 crc kubenswrapper[4870]: I0216 17:21:52.858611 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-686cd77f6d-7xrcx" podUID="0de98d85-29b8-44b5-b120-72c0c42e4714" containerName="neutron-httpd" containerID="cri-o://5dd6804a265c7e338bf704f744968ed596cdcd5ce460d8721e17916ef0e10370" gracePeriod=30 Feb 16 17:21:52 crc kubenswrapper[4870]: I0216 17:21:52.990898 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"888818e1-6d0c-455f-85ad-021dd68c2510","Type":"ContainerStarted","Data":"2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26"} Feb 16 17:21:53 crc kubenswrapper[4870]: I0216 17:21:53.874532 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:21:53 crc kubenswrapper[4870]: I0216 17:21:53.874766 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="00a8e7ab-716d-408e-a531-c49194dca35c" containerName="glance-log" containerID="cri-o://240a7f8cc7a6f44e4eed6515d3df1dd9504c59fa8f55469dcb1496592e3a477d" gracePeriod=30 Feb 16 17:21:53 crc kubenswrapper[4870]: I0216 17:21:53.874911 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="00a8e7ab-716d-408e-a531-c49194dca35c" containerName="glance-httpd" containerID="cri-o://dc7e3d4994866d266c53cd63755dd24b12c7fa513b569229e8dcc1e8bca43594" gracePeriod=30 Feb 16 17:21:54 crc kubenswrapper[4870]: I0216 17:21:54.010356 4870 generic.go:334] "Generic (PLEG): container finished" podID="00a8e7ab-716d-408e-a531-c49194dca35c" containerID="240a7f8cc7a6f44e4eed6515d3df1dd9504c59fa8f55469dcb1496592e3a477d" exitCode=143 Feb 16 17:21:54 crc kubenswrapper[4870]: I0216 17:21:54.010418 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"00a8e7ab-716d-408e-a531-c49194dca35c","Type":"ContainerDied","Data":"240a7f8cc7a6f44e4eed6515d3df1dd9504c59fa8f55469dcb1496592e3a477d"} Feb 16 17:21:54 crc kubenswrapper[4870]: I0216 17:21:54.012758 4870 generic.go:334] "Generic (PLEG): container finished" podID="0de98d85-29b8-44b5-b120-72c0c42e4714" containerID="5dd6804a265c7e338bf704f744968ed596cdcd5ce460d8721e17916ef0e10370" exitCode=0 Feb 16 17:21:54 crc kubenswrapper[4870]: I0216 17:21:54.012792 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-686cd77f6d-7xrcx" event={"ID":"0de98d85-29b8-44b5-b120-72c0c42e4714","Type":"ContainerDied","Data":"5dd6804a265c7e338bf704f744968ed596cdcd5ce460d8721e17916ef0e10370"} Feb 16 17:21:55 crc kubenswrapper[4870]: I0216 17:21:55.673101 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:21:56 crc kubenswrapper[4870]: I0216 17:21:56.012550 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:56 crc kubenswrapper[4870]: I0216 17:21:56.014379 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-8f5dc8565-bnkj8" Feb 16 17:21:56 crc kubenswrapper[4870]: I0216 17:21:56.340800 4870 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod887feada-bbae-4e0a-bb20-a1e29b65cef9"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod887feada-bbae-4e0a-bb20-a1e29b65cef9] : Timed out while waiting for systemd to remove kubepods-besteffort-pod887feada_bbae_4e0a_bb20_a1e29b65cef9.slice" Feb 16 17:21:56 crc kubenswrapper[4870]: E0216 17:21:56.341149 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod887feada-bbae-4e0a-bb20-a1e29b65cef9] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod887feada-bbae-4e0a-bb20-a1e29b65cef9] : Timed out 
while waiting for systemd to remove kubepods-besteffort-pod887feada_bbae_4e0a_bb20_a1e29b65cef9.slice" pod="openstack/barbican-worker-89df85dc-88tt5" podUID="887feada-bbae-4e0a-bb20-a1e29b65cef9" Feb 16 17:21:57 crc kubenswrapper[4870]: I0216 17:21:57.058189 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-89df85dc-88tt5" Feb 16 17:21:57 crc kubenswrapper[4870]: I0216 17:21:57.088587 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-89df85dc-88tt5"] Feb 16 17:21:57 crc kubenswrapper[4870]: I0216 17:21:57.098898 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-89df85dc-88tt5"] Feb 16 17:21:58 crc kubenswrapper[4870]: I0216 17:21:58.074646 4870 generic.go:334] "Generic (PLEG): container finished" podID="00a8e7ab-716d-408e-a531-c49194dca35c" containerID="dc7e3d4994866d266c53cd63755dd24b12c7fa513b569229e8dcc1e8bca43594" exitCode=0 Feb 16 17:21:58 crc kubenswrapper[4870]: I0216 17:21:58.074723 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00a8e7ab-716d-408e-a531-c49194dca35c","Type":"ContainerDied","Data":"dc7e3d4994866d266c53cd63755dd24b12c7fa513b569229e8dcc1e8bca43594"} Feb 16 17:21:58 crc kubenswrapper[4870]: I0216 17:21:58.078111 4870 generic.go:334] "Generic (PLEG): container finished" podID="0de98d85-29b8-44b5-b120-72c0c42e4714" containerID="dc694537ffbad444d7679dfda4dd89d7882b89b56e2c3de4fd663fbd4021d6cd" exitCode=0 Feb 16 17:21:58 crc kubenswrapper[4870]: I0216 17:21:58.078141 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-686cd77f6d-7xrcx" event={"ID":"0de98d85-29b8-44b5-b120-72c0c42e4714","Type":"ContainerDied","Data":"dc694537ffbad444d7679dfda4dd89d7882b89b56e2c3de4fd663fbd4021d6cd"} Feb 16 17:21:58 crc kubenswrapper[4870]: I0216 17:21:58.238897 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="887feada-bbae-4e0a-bb20-a1e29b65cef9" path="/var/lib/kubelet/pods/887feada-bbae-4e0a-bb20-a1e29b65cef9/volumes" Feb 16 17:21:58 crc kubenswrapper[4870]: I0216 17:21:58.994819 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:21:59 crc kubenswrapper[4870]: I0216 17:21:59.592730 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:21:59 crc kubenswrapper[4870]: I0216 17:21:59.620897 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-ovndb-tls-certs\") pod \"0de98d85-29b8-44b5-b120-72c0c42e4714\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " Feb 16 17:21:59 crc kubenswrapper[4870]: I0216 17:21:59.621026 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-combined-ca-bundle\") pod \"0de98d85-29b8-44b5-b120-72c0c42e4714\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " Feb 16 17:21:59 crc kubenswrapper[4870]: I0216 17:21:59.621087 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-httpd-config\") pod \"0de98d85-29b8-44b5-b120-72c0c42e4714\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " Feb 16 17:21:59 crc kubenswrapper[4870]: I0216 17:21:59.621165 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwrk4\" (UniqueName: \"kubernetes.io/projected/0de98d85-29b8-44b5-b120-72c0c42e4714-kube-api-access-wwrk4\") pod \"0de98d85-29b8-44b5-b120-72c0c42e4714\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " Feb 16 17:21:59 crc kubenswrapper[4870]: I0216 17:21:59.621289 4870 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-config\") pod \"0de98d85-29b8-44b5-b120-72c0c42e4714\" (UID: \"0de98d85-29b8-44b5-b120-72c0c42e4714\") " Feb 16 17:21:59 crc kubenswrapper[4870]: I0216 17:21:59.625487 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0de98d85-29b8-44b5-b120-72c0c42e4714" (UID: "0de98d85-29b8-44b5-b120-72c0c42e4714"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:59 crc kubenswrapper[4870]: I0216 17:21:59.625512 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0de98d85-29b8-44b5-b120-72c0c42e4714-kube-api-access-wwrk4" (OuterVolumeSpecName: "kube-api-access-wwrk4") pod "0de98d85-29b8-44b5-b120-72c0c42e4714" (UID: "0de98d85-29b8-44b5-b120-72c0c42e4714"). InnerVolumeSpecName "kube-api-access-wwrk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:21:59 crc kubenswrapper[4870]: I0216 17:21:59.687684 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-config" (OuterVolumeSpecName: "config") pod "0de98d85-29b8-44b5-b120-72c0c42e4714" (UID: "0de98d85-29b8-44b5-b120-72c0c42e4714"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:59 crc kubenswrapper[4870]: I0216 17:21:59.701315 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0de98d85-29b8-44b5-b120-72c0c42e4714" (UID: "0de98d85-29b8-44b5-b120-72c0c42e4714"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:59 crc kubenswrapper[4870]: I0216 17:21:59.710958 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0de98d85-29b8-44b5-b120-72c0c42e4714" (UID: "0de98d85-29b8-44b5-b120-72c0c42e4714"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:21:59 crc kubenswrapper[4870]: I0216 17:21:59.724883 4870 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:59 crc kubenswrapper[4870]: I0216 17:21:59.724920 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:59 crc kubenswrapper[4870]: I0216 17:21:59.724934 4870 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:59 crc kubenswrapper[4870]: I0216 17:21:59.724963 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwrk4\" (UniqueName: \"kubernetes.io/projected/0de98d85-29b8-44b5-b120-72c0c42e4714-kube-api-access-wwrk4\") on node \"crc\" DevicePath \"\"" Feb 16 17:21:59 crc kubenswrapper[4870]: I0216 17:21:59.724982 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0de98d85-29b8-44b5-b120-72c0c42e4714-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.071218 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" 
podUID="2f8f68af-963d-41b6-89a4-1448670e187e" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.188:8776/healthcheck\": EOF" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.106017 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-686cd77f6d-7xrcx" event={"ID":"0de98d85-29b8-44b5-b120-72c0c42e4714","Type":"ContainerDied","Data":"0ec551d28975d88a0487efcaac2b828708eda530e96b131da31719005a4fffed"} Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.106265 4870 scope.go:117] "RemoveContainer" containerID="5dd6804a265c7e338bf704f744968ed596cdcd5ce460d8721e17916ef0e10370" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.106222 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-686cd77f6d-7xrcx" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.108737 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"888818e1-6d0c-455f-85ad-021dd68c2510","Type":"ContainerStarted","Data":"ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a"} Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.115332 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e5ffb6c2-c33b-4118-985e-52a0e14ba938","Type":"ContainerStarted","Data":"a87c3187d40a7adb685bfdd09ac98ad2b51784a8c0bb2f6a03e114c4b54dbd21"} Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.138688 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.922184774 podStartE2EDuration="13.138668795s" podCreationTimestamp="2026-02-16 17:21:47 +0000 UTC" firstStartedPulling="2026-02-16 17:21:48.005728555 +0000 UTC m=+1312.489192939" lastFinishedPulling="2026-02-16 17:21:59.222212576 +0000 UTC m=+1323.705676960" observedRunningTime="2026-02-16 17:22:00.133905769 +0000 UTC m=+1324.617370153" watchObservedRunningTime="2026-02-16 
17:22:00.138668795 +0000 UTC m=+1324.622133179" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.173494 4870 scope.go:117] "RemoveContainer" containerID="dc694537ffbad444d7679dfda4dd89d7882b89b56e2c3de4fd663fbd4021d6cd" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.183565 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-686cd77f6d-7xrcx"] Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.196119 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-686cd77f6d-7xrcx"] Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.240251 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0de98d85-29b8-44b5-b120-72c0c42e4714" path="/var/lib/kubelet/pods/0de98d85-29b8-44b5-b120-72c0c42e4714/volumes" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.753103 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.844982 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.846557 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-config-data-custom\") pod \"2f8f68af-963d-41b6-89a4-1448670e187e\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.846619 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-combined-ca-bundle\") pod \"2f8f68af-963d-41b6-89a4-1448670e187e\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.846682 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-config-data\") pod \"2f8f68af-963d-41b6-89a4-1448670e187e\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.846730 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-scripts\") pod \"2f8f68af-963d-41b6-89a4-1448670e187e\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.846768 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f8f68af-963d-41b6-89a4-1448670e187e-logs\") pod \"2f8f68af-963d-41b6-89a4-1448670e187e\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.846878 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/2f8f68af-963d-41b6-89a4-1448670e187e-etc-machine-id\") pod \"2f8f68af-963d-41b6-89a4-1448670e187e\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.847011 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f8f68af-963d-41b6-89a4-1448670e187e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2f8f68af-963d-41b6-89a4-1448670e187e" (UID: "2f8f68af-963d-41b6-89a4-1448670e187e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.846924 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g627k\" (UniqueName: \"kubernetes.io/projected/2f8f68af-963d-41b6-89a4-1448670e187e-kube-api-access-g627k\") pod \"2f8f68af-963d-41b6-89a4-1448670e187e\" (UID: \"2f8f68af-963d-41b6-89a4-1448670e187e\") " Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.847258 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f8f68af-963d-41b6-89a4-1448670e187e-logs" (OuterVolumeSpecName: "logs") pod "2f8f68af-963d-41b6-89a4-1448670e187e" (UID: "2f8f68af-963d-41b6-89a4-1448670e187e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.847730 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f8f68af-963d-41b6-89a4-1448670e187e-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.847751 4870 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f8f68af-963d-41b6-89a4-1448670e187e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.855172 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-scripts" (OuterVolumeSpecName: "scripts") pod "2f8f68af-963d-41b6-89a4-1448670e187e" (UID: "2f8f68af-963d-41b6-89a4-1448670e187e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.855247 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f8f68af-963d-41b6-89a4-1448670e187e-kube-api-access-g627k" (OuterVolumeSpecName: "kube-api-access-g627k") pod "2f8f68af-963d-41b6-89a4-1448670e187e" (UID: "2f8f68af-963d-41b6-89a4-1448670e187e"). InnerVolumeSpecName "kube-api-access-g627k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.866101 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2f8f68af-963d-41b6-89a4-1448670e187e" (UID: "2f8f68af-963d-41b6-89a4-1448670e187e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.907170 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f8f68af-963d-41b6-89a4-1448670e187e" (UID: "2f8f68af-963d-41b6-89a4-1448670e187e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.950782 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00a8e7ab-716d-408e-a531-c49194dca35c-httpd-run\") pod \"00a8e7ab-716d-408e-a531-c49194dca35c\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.950898 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-internal-tls-certs\") pod \"00a8e7ab-716d-408e-a531-c49194dca35c\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.950929 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-combined-ca-bundle\") pod \"00a8e7ab-716d-408e-a531-c49194dca35c\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.951019 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-config-data\") pod \"00a8e7ab-716d-408e-a531-c49194dca35c\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.951046 4870 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-scripts\") pod \"00a8e7ab-716d-408e-a531-c49194dca35c\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.951121 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5nhj\" (UniqueName: \"kubernetes.io/projected/00a8e7ab-716d-408e-a531-c49194dca35c-kube-api-access-g5nhj\") pod \"00a8e7ab-716d-408e-a531-c49194dca35c\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.951272 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") pod \"00a8e7ab-716d-408e-a531-c49194dca35c\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.951297 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00a8e7ab-716d-408e-a531-c49194dca35c-logs\") pod \"00a8e7ab-716d-408e-a531-c49194dca35c\" (UID: \"00a8e7ab-716d-408e-a531-c49194dca35c\") " Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.951446 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00a8e7ab-716d-408e-a531-c49194dca35c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "00a8e7ab-716d-408e-a531-c49194dca35c" (UID: "00a8e7ab-716d-408e-a531-c49194dca35c"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.951757 4870 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00a8e7ab-716d-408e-a531-c49194dca35c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.951774 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.951785 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.951795 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g627k\" (UniqueName: \"kubernetes.io/projected/2f8f68af-963d-41b6-89a4-1448670e187e-kube-api-access-g627k\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.951810 4870 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.953297 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00a8e7ab-716d-408e-a531-c49194dca35c-logs" (OuterVolumeSpecName: "logs") pod "00a8e7ab-716d-408e-a531-c49194dca35c" (UID: "00a8e7ab-716d-408e-a531-c49194dca35c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.956146 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00a8e7ab-716d-408e-a531-c49194dca35c-kube-api-access-g5nhj" (OuterVolumeSpecName: "kube-api-access-g5nhj") pod "00a8e7ab-716d-408e-a531-c49194dca35c" (UID: "00a8e7ab-716d-408e-a531-c49194dca35c"). InnerVolumeSpecName "kube-api-access-g5nhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.960618 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-scripts" (OuterVolumeSpecName: "scripts") pod "00a8e7ab-716d-408e-a531-c49194dca35c" (UID: "00a8e7ab-716d-408e-a531-c49194dca35c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.963631 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-config-data" (OuterVolumeSpecName: "config-data") pod "2f8f68af-963d-41b6-89a4-1448670e187e" (UID: "2f8f68af-963d-41b6-89a4-1448670e187e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.980090 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae" (OuterVolumeSpecName: "glance") pod "00a8e7ab-716d-408e-a531-c49194dca35c" (UID: "00a8e7ab-716d-408e-a531-c49194dca35c"). InnerVolumeSpecName "pvc-00d7366f-6279-474d-83f9-372df7eb27ae". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:22:00 crc kubenswrapper[4870]: I0216 17:22:00.999274 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00a8e7ab-716d-408e-a531-c49194dca35c" (UID: "00a8e7ab-716d-408e-a531-c49194dca35c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.038374 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-config-data" (OuterVolumeSpecName: "config-data") pod "00a8e7ab-716d-408e-a531-c49194dca35c" (UID: "00a8e7ab-716d-408e-a531-c49194dca35c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.054510 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.054548 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.054560 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5nhj\" (UniqueName: \"kubernetes.io/projected/00a8e7ab-716d-408e-a531-c49194dca35c-kube-api-access-g5nhj\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.054583 4870 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-00d7366f-6279-474d-83f9-372df7eb27ae\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") on node \"crc\" " Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.054595 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00a8e7ab-716d-408e-a531-c49194dca35c-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.054605 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8f68af-963d-41b6-89a4-1448670e187e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.054613 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.076257 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "00a8e7ab-716d-408e-a531-c49194dca35c" (UID: "00a8e7ab-716d-408e-a531-c49194dca35c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.099810 4870 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.100048 4870 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-00d7366f-6279-474d-83f9-372df7eb27ae" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae") on node "crc" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.154714 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00a8e7ab-716d-408e-a531-c49194dca35c","Type":"ContainerDied","Data":"4cc30dd8003bd348bd90b11cd0f67de61cfe69a43b716226e149cbce32494d8b"} Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.154774 4870 scope.go:117] "RemoveContainer" containerID="dc7e3d4994866d266c53cd63755dd24b12c7fa513b569229e8dcc1e8bca43594" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.154888 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.162726 4870 reconciler_common.go:293] "Volume detached for volume \"pvc-00d7366f-6279-474d-83f9-372df7eb27ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.162825 4870 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00a8e7ab-716d-408e-a531-c49194dca35c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.180919 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"888818e1-6d0c-455f-85ad-021dd68c2510","Type":"ContainerStarted","Data":"8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87"} Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.220556 4870 generic.go:334] "Generic (PLEG): container finished" 
podID="2f8f68af-963d-41b6-89a4-1448670e187e" containerID="a841874dc8c9d028f0a34a64cc96f8825fb763b28ba2f94eb9acf96947519388" exitCode=137 Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.222499 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2f8f68af-963d-41b6-89a4-1448670e187e","Type":"ContainerDied","Data":"a841874dc8c9d028f0a34a64cc96f8825fb763b28ba2f94eb9acf96947519388"} Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.222548 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2f8f68af-963d-41b6-89a4-1448670e187e","Type":"ContainerDied","Data":"195eb0cf50319ef4db4e5a4d936223859ff01977a246fea6bc7ece908915462e"} Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.223420 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.249327 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.256184 4870 scope.go:117] "RemoveContainer" containerID="240a7f8cc7a6f44e4eed6515d3df1dd9504c59fa8f55469dcb1496592e3a477d" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.266919 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.296178 4870 scope.go:117] "RemoveContainer" containerID="a841874dc8c9d028f0a34a64cc96f8825fb763b28ba2f94eb9acf96947519388" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.305914 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:22:01 crc kubenswrapper[4870]: E0216 17:22:01.306365 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00a8e7ab-716d-408e-a531-c49194dca35c" containerName="glance-log" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 
17:22:01.306378 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="00a8e7ab-716d-408e-a531-c49194dca35c" containerName="glance-log" Feb 16 17:22:01 crc kubenswrapper[4870]: E0216 17:22:01.306390 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de98d85-29b8-44b5-b120-72c0c42e4714" containerName="neutron-api" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.306396 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de98d85-29b8-44b5-b120-72c0c42e4714" containerName="neutron-api" Feb 16 17:22:01 crc kubenswrapper[4870]: E0216 17:22:01.306414 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8f68af-963d-41b6-89a4-1448670e187e" containerName="cinder-api-log" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.306420 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8f68af-963d-41b6-89a4-1448670e187e" containerName="cinder-api-log" Feb 16 17:22:01 crc kubenswrapper[4870]: E0216 17:22:01.306431 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8f68af-963d-41b6-89a4-1448670e187e" containerName="cinder-api" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.306438 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8f68af-963d-41b6-89a4-1448670e187e" containerName="cinder-api" Feb 16 17:22:01 crc kubenswrapper[4870]: E0216 17:22:01.306449 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de98d85-29b8-44b5-b120-72c0c42e4714" containerName="neutron-httpd" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.306455 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de98d85-29b8-44b5-b120-72c0c42e4714" containerName="neutron-httpd" Feb 16 17:22:01 crc kubenswrapper[4870]: E0216 17:22:01.306477 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00a8e7ab-716d-408e-a531-c49194dca35c" containerName="glance-httpd" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.306482 4870 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="00a8e7ab-716d-408e-a531-c49194dca35c" containerName="glance-httpd" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.306657 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8f68af-963d-41b6-89a4-1448670e187e" containerName="cinder-api-log" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.306667 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8f68af-963d-41b6-89a4-1448670e187e" containerName="cinder-api" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.306673 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="0de98d85-29b8-44b5-b120-72c0c42e4714" containerName="neutron-api" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.306689 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="00a8e7ab-716d-408e-a531-c49194dca35c" containerName="glance-log" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.306717 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="00a8e7ab-716d-408e-a531-c49194dca35c" containerName="glance-httpd" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.306726 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="0de98d85-29b8-44b5-b120-72c0c42e4714" containerName="neutron-httpd" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.307892 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.312241 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.312272 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.331616 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.345557 4870 scope.go:117] "RemoveContainer" containerID="ef178f4ef20b90d82b9caa259bec736cfac006315173943090567acb2c7d4906" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.351335 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.365067 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.368242 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66aea147-403d-4f20-837e-2a492e54cb60-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.368299 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66aea147-403d-4f20-837e-2a492e54cb60-logs\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.368341 4870 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66aea147-403d-4f20-837e-2a492e54cb60-scripts\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.368374 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66aea147-403d-4f20-837e-2a492e54cb60-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.368396 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-00d7366f-6279-474d-83f9-372df7eb27ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.368439 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66aea147-403d-4f20-837e-2a492e54cb60-config-data\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.368456 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/66aea147-403d-4f20-837e-2a492e54cb60-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc 
kubenswrapper[4870]: I0216 17:22:01.368471 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxn6h\" (UniqueName: \"kubernetes.io/projected/66aea147-403d-4f20-837e-2a492e54cb60-kube-api-access-lxn6h\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.372901 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.374794 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.376850 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.377047 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.377077 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.387206 4870 scope.go:117] "RemoveContainer" containerID="a841874dc8c9d028f0a34a64cc96f8825fb763b28ba2f94eb9acf96947519388" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.387292 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 17:22:01 crc kubenswrapper[4870]: E0216 17:22:01.388462 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a841874dc8c9d028f0a34a64cc96f8825fb763b28ba2f94eb9acf96947519388\": container with ID starting with a841874dc8c9d028f0a34a64cc96f8825fb763b28ba2f94eb9acf96947519388 not found: ID does not exist" 
containerID="a841874dc8c9d028f0a34a64cc96f8825fb763b28ba2f94eb9acf96947519388" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.388507 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a841874dc8c9d028f0a34a64cc96f8825fb763b28ba2f94eb9acf96947519388"} err="failed to get container status \"a841874dc8c9d028f0a34a64cc96f8825fb763b28ba2f94eb9acf96947519388\": rpc error: code = NotFound desc = could not find container \"a841874dc8c9d028f0a34a64cc96f8825fb763b28ba2f94eb9acf96947519388\": container with ID starting with a841874dc8c9d028f0a34a64cc96f8825fb763b28ba2f94eb9acf96947519388 not found: ID does not exist" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.388532 4870 scope.go:117] "RemoveContainer" containerID="ef178f4ef20b90d82b9caa259bec736cfac006315173943090567acb2c7d4906" Feb 16 17:22:01 crc kubenswrapper[4870]: E0216 17:22:01.388994 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef178f4ef20b90d82b9caa259bec736cfac006315173943090567acb2c7d4906\": container with ID starting with ef178f4ef20b90d82b9caa259bec736cfac006315173943090567acb2c7d4906 not found: ID does not exist" containerID="ef178f4ef20b90d82b9caa259bec736cfac006315173943090567acb2c7d4906" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.389023 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef178f4ef20b90d82b9caa259bec736cfac006315173943090567acb2c7d4906"} err="failed to get container status \"ef178f4ef20b90d82b9caa259bec736cfac006315173943090567acb2c7d4906\": rpc error: code = NotFound desc = could not find container \"ef178f4ef20b90d82b9caa259bec736cfac006315173943090567acb2c7d4906\": container with ID starting with ef178f4ef20b90d82b9caa259bec736cfac006315173943090567acb2c7d4906 not found: ID does not exist" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.473737 4870 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.473778 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66aea147-403d-4f20-837e-2a492e54cb60-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.473797 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-config-data-custom\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.473842 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66aea147-403d-4f20-837e-2a492e54cb60-logs\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.473885 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66aea147-403d-4f20-837e-2a492e54cb60-scripts\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.473903 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46039c25-14ee-4091-8b9c-8bddcd95d44f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.473924 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.473965 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpt2b\" (UniqueName: \"kubernetes.io/projected/46039c25-14ee-4091-8b9c-8bddcd95d44f-kube-api-access-mpt2b\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.473989 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66aea147-403d-4f20-837e-2a492e54cb60-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.474013 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-00d7366f-6279-474d-83f9-372df7eb27ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.474057 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-config-data\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.474074 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66aea147-403d-4f20-837e-2a492e54cb60-config-data\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.474092 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/66aea147-403d-4f20-837e-2a492e54cb60-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.474107 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.474126 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxn6h\" (UniqueName: \"kubernetes.io/projected/66aea147-403d-4f20-837e-2a492e54cb60-kube-api-access-lxn6h\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.474144 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-scripts\") pod 
\"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.474179 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46039c25-14ee-4091-8b9c-8bddcd95d44f-logs\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.476340 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/66aea147-403d-4f20-837e-2a492e54cb60-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.476610 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66aea147-403d-4f20-837e-2a492e54cb60-logs\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.478407 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.478450 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-00d7366f-6279-474d-83f9-372df7eb27ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3ef4ca648708bfc498381351edc504adf309a14cbb60cff0c6075d7e8a16e973/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.480469 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66aea147-403d-4f20-837e-2a492e54cb60-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.482527 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66aea147-403d-4f20-837e-2a492e54cb60-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.482848 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66aea147-403d-4f20-837e-2a492e54cb60-scripts\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.486090 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66aea147-403d-4f20-837e-2a492e54cb60-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.505613 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxn6h\" (UniqueName: \"kubernetes.io/projected/66aea147-403d-4f20-837e-2a492e54cb60-kube-api-access-lxn6h\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.546090 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.546607 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5dccdc97-f78d-4a2e-9e18-4956fe9fc535" containerName="glance-log" containerID="cri-o://174b76b80058494b81583b695257ebdd50654383978093ce2c861c57bc68f5cb" gracePeriod=30 Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.546740 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5dccdc97-f78d-4a2e-9e18-4956fe9fc535" containerName="glance-httpd" containerID="cri-o://d4a81ce2df993166d092df61d9ef89ce07467e6e0f69904c8303f8d50d8733c2" gracePeriod=30 Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.571686 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-00d7366f-6279-474d-83f9-372df7eb27ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00d7366f-6279-474d-83f9-372df7eb27ae\") pod \"glance-default-internal-api-0\" (UID: \"66aea147-403d-4f20-837e-2a492e54cb60\") " pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.575596 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/46039c25-14ee-4091-8b9c-8bddcd95d44f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.575708 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46039c25-14ee-4091-8b9c-8bddcd95d44f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.576567 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.576740 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpt2b\" (UniqueName: \"kubernetes.io/projected/46039c25-14ee-4091-8b9c-8bddcd95d44f-kube-api-access-mpt2b\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.577488 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-config-data\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.578151 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 
17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.578226 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-scripts\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.578426 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46039c25-14ee-4091-8b9c-8bddcd95d44f-logs\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.578564 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.578600 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-config-data-custom\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.580680 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46039c25-14ee-4091-8b9c-8bddcd95d44f-logs\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.582396 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-config-data\") pod \"cinder-api-0\" (UID: 
\"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.584293 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.584510 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.587697 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-scripts\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.589681 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-config-data-custom\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.599719 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpt2b\" (UniqueName: \"kubernetes.io/projected/46039c25-14ee-4091-8b9c-8bddcd95d44f-kube-api-access-mpt2b\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.600784 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/46039c25-14ee-4091-8b9c-8bddcd95d44f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"46039c25-14ee-4091-8b9c-8bddcd95d44f\") " pod="openstack/cinder-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.654804 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:01 crc kubenswrapper[4870]: I0216 17:22:01.689712 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 17:22:02 crc kubenswrapper[4870]: I0216 17:22:02.231119 4870 generic.go:334] "Generic (PLEG): container finished" podID="5dccdc97-f78d-4a2e-9e18-4956fe9fc535" containerID="174b76b80058494b81583b695257ebdd50654383978093ce2c861c57bc68f5cb" exitCode=143 Feb 16 17:22:02 crc kubenswrapper[4870]: I0216 17:22:02.237474 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00a8e7ab-716d-408e-a531-c49194dca35c" path="/var/lib/kubelet/pods/00a8e7ab-716d-408e-a531-c49194dca35c/volumes" Feb 16 17:22:02 crc kubenswrapper[4870]: I0216 17:22:02.238221 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f8f68af-963d-41b6-89a4-1448670e187e" path="/var/lib/kubelet/pods/2f8f68af-963d-41b6-89a4-1448670e187e/volumes" Feb 16 17:22:02 crc kubenswrapper[4870]: I0216 17:22:02.239128 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5dccdc97-f78d-4a2e-9e18-4956fe9fc535","Type":"ContainerDied","Data":"174b76b80058494b81583b695257ebdd50654383978093ce2c861c57bc68f5cb"} Feb 16 17:22:02 crc kubenswrapper[4870]: I0216 17:22:02.334280 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 17:22:02 crc kubenswrapper[4870]: I0216 17:22:02.421003 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 17:22:03 crc kubenswrapper[4870]: I0216 17:22:03.251613 4870 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"66aea147-403d-4f20-837e-2a492e54cb60","Type":"ContainerStarted","Data":"528aed96dcf74144b24aa152ea00a894ebe00b493c062b53214c0ae6a3714e5e"} Feb 16 17:22:03 crc kubenswrapper[4870]: I0216 17:22:03.258579 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"46039c25-14ee-4091-8b9c-8bddcd95d44f","Type":"ContainerStarted","Data":"e813dbfb59a5f28a8fd7630214f461283756925efa7ce34d7d52d72ba52dda54"} Feb 16 17:22:03 crc kubenswrapper[4870]: I0216 17:22:03.258940 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"46039c25-14ee-4091-8b9c-8bddcd95d44f","Type":"ContainerStarted","Data":"ff0ef4d1135ac7f6ecf04c7226204b1217fb82429cfb5037b092e3c106e5b0c8"} Feb 16 17:22:03 crc kubenswrapper[4870]: I0216 17:22:03.266777 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"888818e1-6d0c-455f-85ad-021dd68c2510","Type":"ContainerStarted","Data":"3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab"} Feb 16 17:22:03 crc kubenswrapper[4870]: I0216 17:22:03.266985 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="888818e1-6d0c-455f-85ad-021dd68c2510" containerName="ceilometer-central-agent" containerID="cri-o://2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26" gracePeriod=30 Feb 16 17:22:03 crc kubenswrapper[4870]: I0216 17:22:03.267309 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 17:22:03 crc kubenswrapper[4870]: I0216 17:22:03.267654 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="888818e1-6d0c-455f-85ad-021dd68c2510" containerName="proxy-httpd" containerID="cri-o://3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab" 
gracePeriod=30 Feb 16 17:22:03 crc kubenswrapper[4870]: I0216 17:22:03.267737 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="888818e1-6d0c-455f-85ad-021dd68c2510" containerName="sg-core" containerID="cri-o://8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87" gracePeriod=30 Feb 16 17:22:03 crc kubenswrapper[4870]: I0216 17:22:03.267790 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="888818e1-6d0c-455f-85ad-021dd68c2510" containerName="ceilometer-notification-agent" containerID="cri-o://ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a" gracePeriod=30 Feb 16 17:22:03 crc kubenswrapper[4870]: I0216 17:22:03.300432 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.762830638 podStartE2EDuration="13.300398216s" podCreationTimestamp="2026-02-16 17:21:50 +0000 UTC" firstStartedPulling="2026-02-16 17:21:51.911165627 +0000 UTC m=+1316.394630011" lastFinishedPulling="2026-02-16 17:22:02.448733195 +0000 UTC m=+1326.932197589" observedRunningTime="2026-02-16 17:22:03.294434596 +0000 UTC m=+1327.777898980" watchObservedRunningTime="2026-02-16 17:22:03.300398216 +0000 UTC m=+1327.783862600" Feb 16 17:22:03 crc kubenswrapper[4870]: E0216 17:22:03.373954 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:22:03 crc kubenswrapper[4870]: E0216 17:22:03.374005 4870 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:22:03 crc kubenswrapper[4870]: E0216 17:22:03.374124 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/
var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7b2p7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6hkdm_openstack(34a86750-1fff-4add-8462-7ab805ec7f89): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:22:03 crc kubenswrapper[4870]: E0216 17:22:03.376429 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.151670 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.272298 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4675\" (UniqueName: \"kubernetes.io/projected/888818e1-6d0c-455f-85ad-021dd68c2510-kube-api-access-t4675\") pod \"888818e1-6d0c-455f-85ad-021dd68c2510\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.272401 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-combined-ca-bundle\") pod \"888818e1-6d0c-455f-85ad-021dd68c2510\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.272569 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/888818e1-6d0c-455f-85ad-021dd68c2510-run-httpd\") pod \"888818e1-6d0c-455f-85ad-021dd68c2510\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.272618 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/888818e1-6d0c-455f-85ad-021dd68c2510-log-httpd\") pod \"888818e1-6d0c-455f-85ad-021dd68c2510\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.272742 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-sg-core-conf-yaml\") pod 
\"888818e1-6d0c-455f-85ad-021dd68c2510\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.272775 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-scripts\") pod \"888818e1-6d0c-455f-85ad-021dd68c2510\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.272847 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-config-data\") pod \"888818e1-6d0c-455f-85ad-021dd68c2510\" (UID: \"888818e1-6d0c-455f-85ad-021dd68c2510\") " Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.274859 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/888818e1-6d0c-455f-85ad-021dd68c2510-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "888818e1-6d0c-455f-85ad-021dd68c2510" (UID: "888818e1-6d0c-455f-85ad-021dd68c2510"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.276595 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/888818e1-6d0c-455f-85ad-021dd68c2510-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "888818e1-6d0c-455f-85ad-021dd68c2510" (UID: "888818e1-6d0c-455f-85ad-021dd68c2510"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.280682 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/888818e1-6d0c-455f-85ad-021dd68c2510-kube-api-access-t4675" (OuterVolumeSpecName: "kube-api-access-t4675") pod "888818e1-6d0c-455f-85ad-021dd68c2510" (UID: "888818e1-6d0c-455f-85ad-021dd68c2510"). 
InnerVolumeSpecName "kube-api-access-t4675". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.281507 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-scripts" (OuterVolumeSpecName: "scripts") pod "888818e1-6d0c-455f-85ad-021dd68c2510" (UID: "888818e1-6d0c-455f-85ad-021dd68c2510"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.282969 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"66aea147-403d-4f20-837e-2a492e54cb60","Type":"ContainerStarted","Data":"fa896ca03cc11284d9402cf9376c60d78daa9bba46b24402d48520107135b39c"} Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.283020 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"66aea147-403d-4f20-837e-2a492e54cb60","Type":"ContainerStarted","Data":"00e643a77fc09cfcdfc070b48acb5bdb610833ba5e62fdfccca9a8e9bdb28371"} Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.288018 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"46039c25-14ee-4091-8b9c-8bddcd95d44f","Type":"ContainerStarted","Data":"2f6e5405212348e5db07e8a9f9830ed1aca047eea079198158a00768af9317c2"} Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.288855 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.297500 4870 generic.go:334] "Generic (PLEG): container finished" podID="888818e1-6d0c-455f-85ad-021dd68c2510" containerID="3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab" exitCode=0 Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.297537 4870 generic.go:334] "Generic (PLEG): container finished" 
podID="888818e1-6d0c-455f-85ad-021dd68c2510" containerID="8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87" exitCode=2 Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.297547 4870 generic.go:334] "Generic (PLEG): container finished" podID="888818e1-6d0c-455f-85ad-021dd68c2510" containerID="ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a" exitCode=0 Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.297557 4870 generic.go:334] "Generic (PLEG): container finished" podID="888818e1-6d0c-455f-85ad-021dd68c2510" containerID="2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26" exitCode=0 Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.297582 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"888818e1-6d0c-455f-85ad-021dd68c2510","Type":"ContainerDied","Data":"3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab"} Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.297613 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"888818e1-6d0c-455f-85ad-021dd68c2510","Type":"ContainerDied","Data":"8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87"} Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.297625 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"888818e1-6d0c-455f-85ad-021dd68c2510","Type":"ContainerDied","Data":"ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a"} Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.297634 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"888818e1-6d0c-455f-85ad-021dd68c2510","Type":"ContainerDied","Data":"2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26"} Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.297642 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"888818e1-6d0c-455f-85ad-021dd68c2510","Type":"ContainerDied","Data":"b3f77a003ab83db30e3dff292ac696a554b4d9d55497f1606a110718ed882b6c"} Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.297656 4870 scope.go:117] "RemoveContainer" containerID="3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.297799 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.324637 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.3246057589999998 podStartE2EDuration="3.324605759s" podCreationTimestamp="2026-02-16 17:22:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:04.318207766 +0000 UTC m=+1328.801672150" watchObservedRunningTime="2026-02-16 17:22:04.324605759 +0000 UTC m=+1328.808070143" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.327592 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "888818e1-6d0c-455f-85ad-021dd68c2510" (UID: "888818e1-6d0c-455f-85ad-021dd68c2510"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.357536 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.357516457 podStartE2EDuration="3.357516457s" podCreationTimestamp="2026-02-16 17:22:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:04.353788381 +0000 UTC m=+1328.837252755" watchObservedRunningTime="2026-02-16 17:22:04.357516457 +0000 UTC m=+1328.840980841" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.377452 4870 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/888818e1-6d0c-455f-85ad-021dd68c2510-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.377482 4870 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/888818e1-6d0c-455f-85ad-021dd68c2510-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.377490 4870 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.377499 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.377508 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4675\" (UniqueName: \"kubernetes.io/projected/888818e1-6d0c-455f-85ad-021dd68c2510-kube-api-access-t4675\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.416142 4870 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "888818e1-6d0c-455f-85ad-021dd68c2510" (UID: "888818e1-6d0c-455f-85ad-021dd68c2510"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.467225 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-config-data" (OuterVolumeSpecName: "config-data") pod "888818e1-6d0c-455f-85ad-021dd68c2510" (UID: "888818e1-6d0c-455f-85ad-021dd68c2510"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.473148 4870 scope.go:117] "RemoveContainer" containerID="8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.483109 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.483150 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888818e1-6d0c-455f-85ad-021dd68c2510-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.512448 4870 scope.go:117] "RemoveContainer" containerID="ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.550060 4870 scope.go:117] "RemoveContainer" containerID="2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.566778 4870 scope.go:117] "RemoveContainer" 
containerID="3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab" Feb 16 17:22:04 crc kubenswrapper[4870]: E0216 17:22:04.567206 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab\": container with ID starting with 3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab not found: ID does not exist" containerID="3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.567239 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab"} err="failed to get container status \"3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab\": rpc error: code = NotFound desc = could not find container \"3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab\": container with ID starting with 3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab not found: ID does not exist" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.567261 4870 scope.go:117] "RemoveContainer" containerID="8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87" Feb 16 17:22:04 crc kubenswrapper[4870]: E0216 17:22:04.567495 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87\": container with ID starting with 8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87 not found: ID does not exist" containerID="8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.567519 4870 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87"} err="failed to get container status \"8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87\": rpc error: code = NotFound desc = could not find container \"8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87\": container with ID starting with 8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87 not found: ID does not exist" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.567534 4870 scope.go:117] "RemoveContainer" containerID="ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a" Feb 16 17:22:04 crc kubenswrapper[4870]: E0216 17:22:04.567835 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a\": container with ID starting with ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a not found: ID does not exist" containerID="ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.567872 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a"} err="failed to get container status \"ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a\": rpc error: code = NotFound desc = could not find container \"ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a\": container with ID starting with ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a not found: ID does not exist" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.567900 4870 scope.go:117] "RemoveContainer" containerID="2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26" Feb 16 17:22:04 crc kubenswrapper[4870]: E0216 17:22:04.568313 4870 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26\": container with ID starting with 2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26 not found: ID does not exist" containerID="2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.568341 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26"} err="failed to get container status \"2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26\": rpc error: code = NotFound desc = could not find container \"2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26\": container with ID starting with 2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26 not found: ID does not exist" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.568356 4870 scope.go:117] "RemoveContainer" containerID="3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.568683 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab"} err="failed to get container status \"3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab\": rpc error: code = NotFound desc = could not find container \"3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab\": container with ID starting with 3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab not found: ID does not exist" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.568733 4870 scope.go:117] "RemoveContainer" containerID="8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.569027 4870 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87"} err="failed to get container status \"8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87\": rpc error: code = NotFound desc = could not find container \"8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87\": container with ID starting with 8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87 not found: ID does not exist" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.569049 4870 scope.go:117] "RemoveContainer" containerID="ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.569230 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a"} err="failed to get container status \"ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a\": rpc error: code = NotFound desc = could not find container \"ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a\": container with ID starting with ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a not found: ID does not exist" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.569257 4870 scope.go:117] "RemoveContainer" containerID="2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.569478 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26"} err="failed to get container status \"2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26\": rpc error: code = NotFound desc = could not find container \"2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26\": container with ID starting with 
2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26 not found: ID does not exist" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.569496 4870 scope.go:117] "RemoveContainer" containerID="3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.569682 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab"} err="failed to get container status \"3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab\": rpc error: code = NotFound desc = could not find container \"3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab\": container with ID starting with 3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab not found: ID does not exist" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.569700 4870 scope.go:117] "RemoveContainer" containerID="8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.569853 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87"} err="failed to get container status \"8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87\": rpc error: code = NotFound desc = could not find container \"8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87\": container with ID starting with 8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87 not found: ID does not exist" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.569868 4870 scope.go:117] "RemoveContainer" containerID="ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.570017 4870 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a"} err="failed to get container status \"ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a\": rpc error: code = NotFound desc = could not find container \"ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a\": container with ID starting with ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a not found: ID does not exist" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.570035 4870 scope.go:117] "RemoveContainer" containerID="2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.570177 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26"} err="failed to get container status \"2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26\": rpc error: code = NotFound desc = could not find container \"2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26\": container with ID starting with 2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26 not found: ID does not exist" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.570195 4870 scope.go:117] "RemoveContainer" containerID="3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.570320 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab"} err="failed to get container status \"3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab\": rpc error: code = NotFound desc = could not find container \"3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab\": container with ID starting with 3015ec2da1440b889209f06061920129bfc8480945d84753892c6b117ac188ab not found: ID does not 
exist" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.570337 4870 scope.go:117] "RemoveContainer" containerID="8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.570663 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87"} err="failed to get container status \"8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87\": rpc error: code = NotFound desc = could not find container \"8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87\": container with ID starting with 8a6d1177fe0a8419f2a2e40adb5d50b7b13194bc95d476f4513378f9a06baa87 not found: ID does not exist" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.570678 4870 scope.go:117] "RemoveContainer" containerID="ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.570805 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a"} err="failed to get container status \"ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a\": rpc error: code = NotFound desc = could not find container \"ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a\": container with ID starting with ac6e74cc06b0c2781676afa83dd14a4594bfd7a81275d567112e167a0b9c1a1a not found: ID does not exist" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.570824 4870 scope.go:117] "RemoveContainer" containerID="2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.570973 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26"} err="failed to get container status 
\"2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26\": rpc error: code = NotFound desc = could not find container \"2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26\": container with ID starting with 2e020937538251509983e3dd2af5cb1db784f4c36da10ad8f162c1b971601c26 not found: ID does not exist" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.639847 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.656359 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.673360 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:22:04 crc kubenswrapper[4870]: E0216 17:22:04.674000 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888818e1-6d0c-455f-85ad-021dd68c2510" containerName="ceilometer-central-agent" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.674030 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="888818e1-6d0c-455f-85ad-021dd68c2510" containerName="ceilometer-central-agent" Feb 16 17:22:04 crc kubenswrapper[4870]: E0216 17:22:04.674055 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888818e1-6d0c-455f-85ad-021dd68c2510" containerName="proxy-httpd" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.674067 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="888818e1-6d0c-455f-85ad-021dd68c2510" containerName="proxy-httpd" Feb 16 17:22:04 crc kubenswrapper[4870]: E0216 17:22:04.674094 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888818e1-6d0c-455f-85ad-021dd68c2510" containerName="ceilometer-notification-agent" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.674103 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="888818e1-6d0c-455f-85ad-021dd68c2510" containerName="ceilometer-notification-agent" Feb 
16 17:22:04 crc kubenswrapper[4870]: E0216 17:22:04.674122 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888818e1-6d0c-455f-85ad-021dd68c2510" containerName="sg-core" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.674130 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="888818e1-6d0c-455f-85ad-021dd68c2510" containerName="sg-core" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.674376 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="888818e1-6d0c-455f-85ad-021dd68c2510" containerName="proxy-httpd" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.674409 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="888818e1-6d0c-455f-85ad-021dd68c2510" containerName="sg-core" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.674420 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="888818e1-6d0c-455f-85ad-021dd68c2510" containerName="ceilometer-notification-agent" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.674435 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="888818e1-6d0c-455f-85ad-021dd68c2510" containerName="ceilometer-central-agent" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.676923 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.679486 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.679919 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.715554 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.764368 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="5dccdc97-f78d-4a2e-9e18-4956fe9fc535" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.173:9292/healthcheck\": read tcp 10.217.0.2:53880->10.217.0.173:9292: read: connection reset by peer" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.764362 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="5dccdc97-f78d-4a2e-9e18-4956fe9fc535" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.173:9292/healthcheck\": read tcp 10.217.0.2:53868->10.217.0.173:9292: read: connection reset by peer" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.788369 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.788444 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.788498 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-scripts\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.788513 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-log-httpd\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.788775 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-config-data\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.788821 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-run-httpd\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.788840 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pz9l\" (UniqueName: \"kubernetes.io/projected/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-kube-api-access-7pz9l\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 
17:22:04.890501 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-config-data\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.891284 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-run-httpd\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.891321 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pz9l\" (UniqueName: \"kubernetes.io/projected/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-kube-api-access-7pz9l\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.891559 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.891673 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.891753 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-scripts\") pod 
\"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.891781 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-log-httpd\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.891831 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-run-httpd\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.892145 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-log-httpd\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.895813 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.896572 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-scripts\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.897015 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-config-data\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.899227 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.910497 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pz9l\" (UniqueName: \"kubernetes.io/projected/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-kube-api-access-7pz9l\") pod \"ceilometer-0\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " pod="openstack/ceilometer-0" Feb 16 17:22:04 crc kubenswrapper[4870]: I0216 17:22:04.993036 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.309973 4870 generic.go:334] "Generic (PLEG): container finished" podID="5dccdc97-f78d-4a2e-9e18-4956fe9fc535" containerID="d4a81ce2df993166d092df61d9ef89ce07467e6e0f69904c8303f8d50d8733c2" exitCode=0 Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.310306 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5dccdc97-f78d-4a2e-9e18-4956fe9fc535","Type":"ContainerDied","Data":"d4a81ce2df993166d092df61d9ef89ce07467e6e0f69904c8303f8d50d8733c2"} Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.310333 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5dccdc97-f78d-4a2e-9e18-4956fe9fc535","Type":"ContainerDied","Data":"b3dd6b1e9d78d60643440240c934d9275fd87985c0188af0263b3580c0061a02"} Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.310343 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3dd6b1e9d78d60643440240c934d9275fd87985c0188af0263b3580c0061a02" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.358954 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.370621 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.370679 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.370743 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.371651 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a26cade4c570777b8e6874ae4e148783c7ff0c66ca799ca6a024730b89056882"} pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.371695 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" containerID="cri-o://a26cade4c570777b8e6874ae4e148783c7ff0c66ca799ca6a024730b89056882" gracePeriod=600 Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.409175 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-logs\") pod \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.409368 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-httpd-run\") pod \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.409474 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-scripts\") pod \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.409644 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-config-data\") pod \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.409750 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5dccdc97-f78d-4a2e-9e18-4956fe9fc535" (UID: "5dccdc97-f78d-4a2e-9e18-4956fe9fc535"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.409775 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-combined-ca-bundle\") pod \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.409813 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-logs" (OuterVolumeSpecName: "logs") pod "5dccdc97-f78d-4a2e-9e18-4956fe9fc535" (UID: "5dccdc97-f78d-4a2e-9e18-4956fe9fc535"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.409883 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-public-tls-certs\") pod \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.410123 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn4zz\" (UniqueName: \"kubernetes.io/projected/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-kube-api-access-sn4zz\") pod \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.410510 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") pod \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\" (UID: \"5dccdc97-f78d-4a2e-9e18-4956fe9fc535\") " Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.411620 4870 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.411645 4870 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.431284 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-kube-api-access-sn4zz" (OuterVolumeSpecName: "kube-api-access-sn4zz") pod "5dccdc97-f78d-4a2e-9e18-4956fe9fc535" (UID: "5dccdc97-f78d-4a2e-9e18-4956fe9fc535"). InnerVolumeSpecName "kube-api-access-sn4zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.446152 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-scripts" (OuterVolumeSpecName: "scripts") pod "5dccdc97-f78d-4a2e-9e18-4956fe9fc535" (UID: "5dccdc97-f78d-4a2e-9e18-4956fe9fc535"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.453929 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae" (OuterVolumeSpecName: "glance") pod "5dccdc97-f78d-4a2e-9e18-4956fe9fc535" (UID: "5dccdc97-f78d-4a2e-9e18-4956fe9fc535"). InnerVolumeSpecName "pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.485194 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5dccdc97-f78d-4a2e-9e18-4956fe9fc535" (UID: "5dccdc97-f78d-4a2e-9e18-4956fe9fc535"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.496286 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-config-data" (OuterVolumeSpecName: "config-data") pod "5dccdc97-f78d-4a2e-9e18-4956fe9fc535" (UID: "5dccdc97-f78d-4a2e-9e18-4956fe9fc535"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.523558 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn4zz\" (UniqueName: \"kubernetes.io/projected/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-kube-api-access-sn4zz\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.523623 4870 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") on node \"crc\" " Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.523640 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.523652 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.523665 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.527960 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5dccdc97-f78d-4a2e-9e18-4956fe9fc535" (UID: "5dccdc97-f78d-4a2e-9e18-4956fe9fc535"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.585616 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.609850 4870 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.610076 4870 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae") on node "crc"
Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.627916 4870 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dccdc97-f78d-4a2e-9e18-4956fe9fc535-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:05 crc kubenswrapper[4870]: I0216 17:22:05.627965 4870 reconciler_common.go:293] "Volume detached for volume \"pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.237395 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="888818e1-6d0c-455f-85ad-021dd68c2510" path="/var/lib/kubelet/pods/888818e1-6d0c-455f-85ad-021dd68c2510/volumes"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.325537 4870 generic.go:334] "Generic (PLEG): container finished" podID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerID="a26cade4c570777b8e6874ae4e148783c7ff0c66ca799ca6a024730b89056882" exitCode=0
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.325610 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerDied","Data":"a26cade4c570777b8e6874ae4e148783c7ff0c66ca799ca6a024730b89056882"}
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.325640 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerStarted","Data":"c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6"}
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.325657 4870 scope.go:117] "RemoveContainer" containerID="c6cb73ad3168219aed3caa65ecbcfeaf20afa41eba328438ce91697a527d897b"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.330212 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.331382 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7c59a89-070d-49a5-8e0a-a8f39e4cd000","Type":"ContainerStarted","Data":"70d616c165c133800e91397a0de1e40aa147d506644c619bd30847021f65a878"}
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.331420 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7c59a89-070d-49a5-8e0a-a8f39e4cd000","Type":"ContainerStarted","Data":"bd986942181184c00c272539c09493c86782e8b29e32d411e3ad1646a03cb15b"}
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.376746 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.397757 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.413515 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 17:22:06 crc kubenswrapper[4870]: E0216 17:22:06.414095 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dccdc97-f78d-4a2e-9e18-4956fe9fc535" containerName="glance-httpd"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.414122 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dccdc97-f78d-4a2e-9e18-4956fe9fc535" containerName="glance-httpd"
Feb 16 17:22:06 crc kubenswrapper[4870]: E0216 17:22:06.414166 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dccdc97-f78d-4a2e-9e18-4956fe9fc535" containerName="glance-log"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.414176 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dccdc97-f78d-4a2e-9e18-4956fe9fc535" containerName="glance-log"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.414452 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dccdc97-f78d-4a2e-9e18-4956fe9fc535" containerName="glance-httpd"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.414487 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dccdc97-f78d-4a2e-9e18-4956fe9fc535" containerName="glance-log"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.415896 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.420280 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.421993 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.423121 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.547157 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.547468 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-config-data\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.547499 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-logs\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.547518 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-scripts\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.547547 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.547620 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4rrr\" (UniqueName: \"kubernetes.io/projected/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-kube-api-access-x4rrr\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.547772 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.547857 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.649933 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.650136 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-config-data\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.650180 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-logs\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.650206 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-scripts\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.650230 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.650265 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4rrr\" (UniqueName: \"kubernetes.io/projected/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-kube-api-access-x4rrr\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.650342 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.650377 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.651154 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.651315 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-logs\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.655454 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-scripts\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.655807 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-config-data\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.656538 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.659625 4870 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.659662 4870 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a6e7916025a705e37bf42169fdb7099d11d639aaccb3a9ce061702c007eb46f2/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.659992 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.672626 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4rrr\" (UniqueName: \"kubernetes.io/projected/ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59-kube-api-access-x4rrr\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.712632 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c1bad89-2882-44f3-82bf-b7b8c12e02ae\") pod \"glance-default-external-api-0\" (UID: \"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59\") " pod="openstack/glance-default-external-api-0"
Feb 16 17:22:06 crc kubenswrapper[4870]: I0216 17:22:06.752052 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 17:22:07 crc kubenswrapper[4870]: I0216 17:22:07.313803 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 17:22:07 crc kubenswrapper[4870]: I0216 17:22:07.340119 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59","Type":"ContainerStarted","Data":"92d71368924b5d269be5bc5b029b4d0950a05f730e6a7cb5557a95499acd538a"}
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.169540 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-jbd2m"]
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.171596 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jbd2m"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.186368 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jbd2m"]
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.276404 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dccdc97-f78d-4a2e-9e18-4956fe9fc535" path="/var/lib/kubelet/pods/5dccdc97-f78d-4a2e-9e18-4956fe9fc535/volumes"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.277265 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-4jdm2"]
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.279629 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4jdm2"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.281798 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n75w2\" (UniqueName: \"kubernetes.io/projected/22bac8e0-a77f-44bc-8011-3f676864a0e1-kube-api-access-n75w2\") pod \"nova-api-db-create-jbd2m\" (UID: \"22bac8e0-a77f-44bc-8011-3f676864a0e1\") " pod="openstack/nova-api-db-create-jbd2m"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.281875 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22bac8e0-a77f-44bc-8011-3f676864a0e1-operator-scripts\") pod \"nova-api-db-create-jbd2m\" (UID: \"22bac8e0-a77f-44bc-8011-3f676864a0e1\") " pod="openstack/nova-api-db-create-jbd2m"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.338145 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4jdm2"]
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.376349 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-663e-account-create-update-ndbzz"]
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.377895 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-663e-account-create-update-ndbzz"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.384368 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.385916 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a0c8689-ac35-4b83-99dc-bdda1ce12ec8-operator-scripts\") pod \"nova-cell0-db-create-4jdm2\" (UID: \"2a0c8689-ac35-4b83-99dc-bdda1ce12ec8\") " pod="openstack/nova-cell0-db-create-4jdm2"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.387460 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n75w2\" (UniqueName: \"kubernetes.io/projected/22bac8e0-a77f-44bc-8011-3f676864a0e1-kube-api-access-n75w2\") pod \"nova-api-db-create-jbd2m\" (UID: \"22bac8e0-a77f-44bc-8011-3f676864a0e1\") " pod="openstack/nova-api-db-create-jbd2m"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.387524 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22bac8e0-a77f-44bc-8011-3f676864a0e1-operator-scripts\") pod \"nova-api-db-create-jbd2m\" (UID: \"22bac8e0-a77f-44bc-8011-3f676864a0e1\") " pod="openstack/nova-api-db-create-jbd2m"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.387574 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7mfq\" (UniqueName: \"kubernetes.io/projected/2a0c8689-ac35-4b83-99dc-bdda1ce12ec8-kube-api-access-p7mfq\") pod \"nova-cell0-db-create-4jdm2\" (UID: \"2a0c8689-ac35-4b83-99dc-bdda1ce12ec8\") " pod="openstack/nova-cell0-db-create-4jdm2"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.405277 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22bac8e0-a77f-44bc-8011-3f676864a0e1-operator-scripts\") pod \"nova-api-db-create-jbd2m\" (UID: \"22bac8e0-a77f-44bc-8011-3f676864a0e1\") " pod="openstack/nova-api-db-create-jbd2m"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.436263 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n75w2\" (UniqueName: \"kubernetes.io/projected/22bac8e0-a77f-44bc-8011-3f676864a0e1-kube-api-access-n75w2\") pod \"nova-api-db-create-jbd2m\" (UID: \"22bac8e0-a77f-44bc-8011-3f676864a0e1\") " pod="openstack/nova-api-db-create-jbd2m"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.436664 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-663e-account-create-update-ndbzz"]
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.474030 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7c59a89-070d-49a5-8e0a-a8f39e4cd000","Type":"ContainerStarted","Data":"31896bf1967d642420d6addf6e0fc91bb85060be50487c1c869d6df557f3df09"}
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.490654 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a05c82-dcac-4d4c-8309-8c2d6389b31b-operator-scripts\") pod \"nova-api-663e-account-create-update-ndbzz\" (UID: \"f9a05c82-dcac-4d4c-8309-8c2d6389b31b\") " pod="openstack/nova-api-663e-account-create-update-ndbzz"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.490745 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7mfq\" (UniqueName: \"kubernetes.io/projected/2a0c8689-ac35-4b83-99dc-bdda1ce12ec8-kube-api-access-p7mfq\") pod \"nova-cell0-db-create-4jdm2\" (UID: \"2a0c8689-ac35-4b83-99dc-bdda1ce12ec8\") " pod="openstack/nova-cell0-db-create-4jdm2"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.490845 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwwt9\" (UniqueName: \"kubernetes.io/projected/f9a05c82-dcac-4d4c-8309-8c2d6389b31b-kube-api-access-lwwt9\") pod \"nova-api-663e-account-create-update-ndbzz\" (UID: \"f9a05c82-dcac-4d4c-8309-8c2d6389b31b\") " pod="openstack/nova-api-663e-account-create-update-ndbzz"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.490912 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a0c8689-ac35-4b83-99dc-bdda1ce12ec8-operator-scripts\") pod \"nova-cell0-db-create-4jdm2\" (UID: \"2a0c8689-ac35-4b83-99dc-bdda1ce12ec8\") " pod="openstack/nova-cell0-db-create-4jdm2"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.491597 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a0c8689-ac35-4b83-99dc-bdda1ce12ec8-operator-scripts\") pod \"nova-cell0-db-create-4jdm2\" (UID: \"2a0c8689-ac35-4b83-99dc-bdda1ce12ec8\") " pod="openstack/nova-cell0-db-create-4jdm2"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.492498 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jbd2m"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.493136 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.495163 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59","Type":"ContainerStarted","Data":"c62832111d507c88a9e10235ade5a174968b66d0f53c85d26cfdc574e012416a"}
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.546606 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7mfq\" (UniqueName: \"kubernetes.io/projected/2a0c8689-ac35-4b83-99dc-bdda1ce12ec8-kube-api-access-p7mfq\") pod \"nova-cell0-db-create-4jdm2\" (UID: \"2a0c8689-ac35-4b83-99dc-bdda1ce12ec8\") " pod="openstack/nova-cell0-db-create-4jdm2"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.598119 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a05c82-dcac-4d4c-8309-8c2d6389b31b-operator-scripts\") pod \"nova-api-663e-account-create-update-ndbzz\" (UID: \"f9a05c82-dcac-4d4c-8309-8c2d6389b31b\") " pod="openstack/nova-api-663e-account-create-update-ndbzz"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.598479 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwwt9\" (UniqueName: \"kubernetes.io/projected/f9a05c82-dcac-4d4c-8309-8c2d6389b31b-kube-api-access-lwwt9\") pod \"nova-api-663e-account-create-update-ndbzz\" (UID: \"f9a05c82-dcac-4d4c-8309-8c2d6389b31b\") " pod="openstack/nova-api-663e-account-create-update-ndbzz"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.603196 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a05c82-dcac-4d4c-8309-8c2d6389b31b-operator-scripts\") pod \"nova-api-663e-account-create-update-ndbzz\" (UID: \"f9a05c82-dcac-4d4c-8309-8c2d6389b31b\") " pod="openstack/nova-api-663e-account-create-update-ndbzz"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.625702 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4jdm2"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.627837 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-vxqf6"]
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.629613 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vxqf6"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.660611 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwwt9\" (UniqueName: \"kubernetes.io/projected/f9a05c82-dcac-4d4c-8309-8c2d6389b31b-kube-api-access-lwwt9\") pod \"nova-api-663e-account-create-update-ndbzz\" (UID: \"f9a05c82-dcac-4d4c-8309-8c2d6389b31b\") " pod="openstack/nova-api-663e-account-create-update-ndbzz"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.691962 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vxqf6"]
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.736013 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-beba-account-create-update-br42d"]
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.737925 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-beba-account-create-update-br42d"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.755290 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.762013 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-beba-account-create-update-br42d"]
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.799467 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-663e-account-create-update-ndbzz"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.806480 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c223101-34a3-41b4-b6ce-5eb5d05692ac-operator-scripts\") pod \"nova-cell1-db-create-vxqf6\" (UID: \"9c223101-34a3-41b4-b6ce-5eb5d05692ac\") " pod="openstack/nova-cell1-db-create-vxqf6"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.806644 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-776m2\" (UniqueName: \"kubernetes.io/projected/9c223101-34a3-41b4-b6ce-5eb5d05692ac-kube-api-access-776m2\") pod \"nova-cell1-db-create-vxqf6\" (UID: \"9c223101-34a3-41b4-b6ce-5eb5d05692ac\") " pod="openstack/nova-cell1-db-create-vxqf6"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.901125 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-da3e-account-create-update-q4dwp"]
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.908258 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-da3e-account-create-update-q4dwp"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.917655 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece93484-8813-4d47-a826-9a8f66cd6d78-operator-scripts\") pod \"nova-cell1-da3e-account-create-update-q4dwp\" (UID: \"ece93484-8813-4d47-a826-9a8f66cd6d78\") " pod="openstack/nova-cell1-da3e-account-create-update-q4dwp"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.917704 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8c81151-3dea-4340-821f-1e6d7df36926-operator-scripts\") pod \"nova-cell0-beba-account-create-update-br42d\" (UID: \"d8c81151-3dea-4340-821f-1e6d7df36926\") " pod="openstack/nova-cell0-beba-account-create-update-br42d"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.917787 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c223101-34a3-41b4-b6ce-5eb5d05692ac-operator-scripts\") pod \"nova-cell1-db-create-vxqf6\" (UID: \"9c223101-34a3-41b4-b6ce-5eb5d05692ac\") " pod="openstack/nova-cell1-db-create-vxqf6"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.917837 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfwfd\" (UniqueName: \"kubernetes.io/projected/d8c81151-3dea-4340-821f-1e6d7df36926-kube-api-access-tfwfd\") pod \"nova-cell0-beba-account-create-update-br42d\" (UID: \"d8c81151-3dea-4340-821f-1e6d7df36926\") " pod="openstack/nova-cell0-beba-account-create-update-br42d"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.917915 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppn52\" (UniqueName: \"kubernetes.io/projected/ece93484-8813-4d47-a826-9a8f66cd6d78-kube-api-access-ppn52\") pod \"nova-cell1-da3e-account-create-update-q4dwp\" (UID: \"ece93484-8813-4d47-a826-9a8f66cd6d78\") " pod="openstack/nova-cell1-da3e-account-create-update-q4dwp"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.917973 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-776m2\" (UniqueName: \"kubernetes.io/projected/9c223101-34a3-41b4-b6ce-5eb5d05692ac-kube-api-access-776m2\") pod \"nova-cell1-db-create-vxqf6\" (UID: \"9c223101-34a3-41b4-b6ce-5eb5d05692ac\") " pod="openstack/nova-cell1-db-create-vxqf6"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.921241 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.941786 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c223101-34a3-41b4-b6ce-5eb5d05692ac-operator-scripts\") pod \"nova-cell1-db-create-vxqf6\" (UID: \"9c223101-34a3-41b4-b6ce-5eb5d05692ac\") " pod="openstack/nova-cell1-db-create-vxqf6"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.960764 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-776m2\" (UniqueName: \"kubernetes.io/projected/9c223101-34a3-41b4-b6ce-5eb5d05692ac-kube-api-access-776m2\") pod \"nova-cell1-db-create-vxqf6\" (UID: \"9c223101-34a3-41b4-b6ce-5eb5d05692ac\") " pod="openstack/nova-cell1-db-create-vxqf6"
Feb 16 17:22:08 crc kubenswrapper[4870]: I0216 17:22:08.982258 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-da3e-account-create-update-q4dwp"]
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.019344 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfwfd\" (UniqueName: \"kubernetes.io/projected/d8c81151-3dea-4340-821f-1e6d7df36926-kube-api-access-tfwfd\") pod \"nova-cell0-beba-account-create-update-br42d\" (UID: \"d8c81151-3dea-4340-821f-1e6d7df36926\") " pod="openstack/nova-cell0-beba-account-create-update-br42d"
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.035857 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppn52\" (UniqueName: \"kubernetes.io/projected/ece93484-8813-4d47-a826-9a8f66cd6d78-kube-api-access-ppn52\") pod \"nova-cell1-da3e-account-create-update-q4dwp\" (UID: \"ece93484-8813-4d47-a826-9a8f66cd6d78\") " pod="openstack/nova-cell1-da3e-account-create-update-q4dwp"
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.036037 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece93484-8813-4d47-a826-9a8f66cd6d78-operator-scripts\") pod \"nova-cell1-da3e-account-create-update-q4dwp\" (UID: \"ece93484-8813-4d47-a826-9a8f66cd6d78\") " pod="openstack/nova-cell1-da3e-account-create-update-q4dwp"
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.036074 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8c81151-3dea-4340-821f-1e6d7df36926-operator-scripts\") pod \"nova-cell0-beba-account-create-update-br42d\" (UID: \"d8c81151-3dea-4340-821f-1e6d7df36926\") " pod="openstack/nova-cell0-beba-account-create-update-br42d"
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.037035 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8c81151-3dea-4340-821f-1e6d7df36926-operator-scripts\") pod \"nova-cell0-beba-account-create-update-br42d\" (UID: \"d8c81151-3dea-4340-821f-1e6d7df36926\") " pod="openstack/nova-cell0-beba-account-create-update-br42d"
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.039128 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece93484-8813-4d47-a826-9a8f66cd6d78-operator-scripts\") pod \"nova-cell1-da3e-account-create-update-q4dwp\" (UID: \"ece93484-8813-4d47-a826-9a8f66cd6d78\") " pod="openstack/nova-cell1-da3e-account-create-update-q4dwp"
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.089667 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppn52\" (UniqueName: \"kubernetes.io/projected/ece93484-8813-4d47-a826-9a8f66cd6d78-kube-api-access-ppn52\") pod \"nova-cell1-da3e-account-create-update-q4dwp\" (UID: \"ece93484-8813-4d47-a826-9a8f66cd6d78\") " pod="openstack/nova-cell1-da3e-account-create-update-q4dwp"
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.090173 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfwfd\" (UniqueName: \"kubernetes.io/projected/d8c81151-3dea-4340-821f-1e6d7df36926-kube-api-access-tfwfd\") pod \"nova-cell0-beba-account-create-update-br42d\" (UID: \"d8c81151-3dea-4340-821f-1e6d7df36926\") " pod="openstack/nova-cell0-beba-account-create-update-br42d"
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.141238 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-beba-account-create-update-br42d"
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.251917 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vxqf6"
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.270490 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-da3e-account-create-update-q4dwp"
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.324783 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jbd2m"]
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.522222 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7c59a89-070d-49a5-8e0a-a8f39e4cd000","Type":"ContainerStarted","Data":"f2e8139cf217c7ee26e844b65cbe300034e800ff1b60db4e50c7919cc7051c75"}
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.525227 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59","Type":"ContainerStarted","Data":"a67774d4aa1bb411fc225c6b8f3864407d59bca9c7129b6e264406b56997dcdf"}
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.529994 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jbd2m" event={"ID":"22bac8e0-a77f-44bc-8011-3f676864a0e1","Type":"ContainerStarted","Data":"af538ec8dc0950874d90389d571b4ebced417a2efe70aaae41f8fd26dc0d239c"}
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.581467 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.579904344 podStartE2EDuration="3.579904344s" podCreationTimestamp="2026-02-16 17:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:09.55626416 +0000 UTC m=+1334.039728564" watchObservedRunningTime="2026-02-16 17:22:09.579904344 +0000 UTC m=+1334.063368728"
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.591867 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-663e-account-create-update-ndbzz"]
Feb 16 17:22:09 crc kubenswrapper[4870]: W0216 17:22:09.597529 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a0c8689_ac35_4b83_99dc_bdda1ce12ec8.slice/crio-cf2c573b9abed36de70952b9704df2e9ac165366de90d44c9c09fb7a79bd5ab4 WatchSource:0}: Error finding container cf2c573b9abed36de70952b9704df2e9ac165366de90d44c9c09fb7a79bd5ab4: Status 404 returned error can't find the container with id cf2c573b9abed36de70952b9704df2e9ac165366de90d44c9c09fb7a79bd5ab4
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.615598 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4jdm2"]
Feb 16 17:22:09 crc kubenswrapper[4870]: I0216 17:22:09.904186 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-beba-account-create-update-br42d"]
Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.074961 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vxqf6"]
Feb 16 17:22:10 crc kubenswrapper[4870]: W0216 17:22:10.078987 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c223101_34a3_41b4_b6ce_5eb5d05692ac.slice/crio-854b235801772b9982ea95a77271ed0be6cba651aa56d2d234fcb0521bfe1975 WatchSource:0}: Error finding container 854b235801772b9982ea95a77271ed0be6cba651aa56d2d234fcb0521bfe1975: Status 404 returned error can't find the container with id 854b235801772b9982ea95a77271ed0be6cba651aa56d2d234fcb0521bfe1975
Feb 16 17:22:10 crc kubenswrapper[4870]: W0216 17:22:10.087995 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podece93484_8813_4d47_a826_9a8f66cd6d78.slice/crio-b8c034466da8dbfa55e2f8175cd2ab79075c07516365371d048218903c28260c WatchSource:0}: Error finding container b8c034466da8dbfa55e2f8175cd2ab79075c07516365371d048218903c28260c: Status 404 returned error can't find the container with id 
b8c034466da8dbfa55e2f8175cd2ab79075c07516365371d048218903c28260c Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.097764 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-da3e-account-create-update-q4dwp"] Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.540080 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vxqf6" event={"ID":"9c223101-34a3-41b4-b6ce-5eb5d05692ac","Type":"ContainerStarted","Data":"7fc263b14fed307298a467d214e3045c620ef37d81e34d537321be18fc49a3b1"} Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.540459 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vxqf6" event={"ID":"9c223101-34a3-41b4-b6ce-5eb5d05692ac","Type":"ContainerStarted","Data":"854b235801772b9982ea95a77271ed0be6cba651aa56d2d234fcb0521bfe1975"} Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.542035 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-da3e-account-create-update-q4dwp" event={"ID":"ece93484-8813-4d47-a826-9a8f66cd6d78","Type":"ContainerStarted","Data":"687f030256e15077d250143c31028556dd38e9d94629746cb353d0b2be032dc2"} Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.542063 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-da3e-account-create-update-q4dwp" event={"ID":"ece93484-8813-4d47-a826-9a8f66cd6d78","Type":"ContainerStarted","Data":"b8c034466da8dbfa55e2f8175cd2ab79075c07516365371d048218903c28260c"} Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.543715 4870 generic.go:334] "Generic (PLEG): container finished" podID="22bac8e0-a77f-44bc-8011-3f676864a0e1" containerID="a5ebf26825258c524a9df6e63ad4652e66a39809bedd54e034ff01acf8cffcf3" exitCode=0 Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.543749 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jbd2m" 
event={"ID":"22bac8e0-a77f-44bc-8011-3f676864a0e1","Type":"ContainerDied","Data":"a5ebf26825258c524a9df6e63ad4652e66a39809bedd54e034ff01acf8cffcf3"} Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.545175 4870 generic.go:334] "Generic (PLEG): container finished" podID="2a0c8689-ac35-4b83-99dc-bdda1ce12ec8" containerID="ebf8cf98170dfc3a536285ada29bbd4953a1a82da68c25c10575bc65adc2fe98" exitCode=0 Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.545220 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4jdm2" event={"ID":"2a0c8689-ac35-4b83-99dc-bdda1ce12ec8","Type":"ContainerDied","Data":"ebf8cf98170dfc3a536285ada29bbd4953a1a82da68c25c10575bc65adc2fe98"} Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.545244 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4jdm2" event={"ID":"2a0c8689-ac35-4b83-99dc-bdda1ce12ec8","Type":"ContainerStarted","Data":"cf2c573b9abed36de70952b9704df2e9ac165366de90d44c9c09fb7a79bd5ab4"} Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.546651 4870 generic.go:334] "Generic (PLEG): container finished" podID="f9a05c82-dcac-4d4c-8309-8c2d6389b31b" containerID="0a64b818f88149d581ddc32e571e47e79beac8074bb1cb6357b4389314c16a49" exitCode=0 Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.546688 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-663e-account-create-update-ndbzz" event={"ID":"f9a05c82-dcac-4d4c-8309-8c2d6389b31b","Type":"ContainerDied","Data":"0a64b818f88149d581ddc32e571e47e79beac8074bb1cb6357b4389314c16a49"} Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.546701 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-663e-account-create-update-ndbzz" event={"ID":"f9a05c82-dcac-4d4c-8309-8c2d6389b31b","Type":"ContainerStarted","Data":"115420ffa8b1b47b147f53f1fbcb0a29c6f9c4e5d1656b91bdc9c5b7bc2064e0"} Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.549421 
4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-beba-account-create-update-br42d" event={"ID":"d8c81151-3dea-4340-821f-1e6d7df36926","Type":"ContainerStarted","Data":"0ca7c20f99426ba29cae5f3638ad4149c26ad729c07e82fbe78a38671dfc92ce"} Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.549446 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-beba-account-create-update-br42d" event={"ID":"d8c81151-3dea-4340-821f-1e6d7df36926","Type":"ContainerStarted","Data":"c80e980e89c7ae41763c62e605f8c35e8f4e5a0897332567a674923f6fc02f8d"} Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.578262 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-vxqf6" podStartSLOduration=2.578247459 podStartE2EDuration="2.578247459s" podCreationTimestamp="2026-02-16 17:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:10.564515248 +0000 UTC m=+1335.047979632" watchObservedRunningTime="2026-02-16 17:22:10.578247459 +0000 UTC m=+1335.061711843" Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.636210 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-beba-account-create-update-br42d" podStartSLOduration=2.636190601 podStartE2EDuration="2.636190601s" podCreationTimestamp="2026-02-16 17:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:10.633734461 +0000 UTC m=+1335.117198865" watchObservedRunningTime="2026-02-16 17:22:10.636190601 +0000 UTC m=+1335.119654985" Feb 16 17:22:10 crc kubenswrapper[4870]: I0216 17:22:10.650710 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-da3e-account-create-update-q4dwp" podStartSLOduration=2.650687325 
podStartE2EDuration="2.650687325s" podCreationTimestamp="2026-02-16 17:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:10.645586549 +0000 UTC m=+1335.129050943" watchObservedRunningTime="2026-02-16 17:22:10.650687325 +0000 UTC m=+1335.134151709" Feb 16 17:22:11 crc kubenswrapper[4870]: I0216 17:22:11.561193 4870 generic.go:334] "Generic (PLEG): container finished" podID="d8c81151-3dea-4340-821f-1e6d7df36926" containerID="0ca7c20f99426ba29cae5f3638ad4149c26ad729c07e82fbe78a38671dfc92ce" exitCode=0 Feb 16 17:22:11 crc kubenswrapper[4870]: I0216 17:22:11.561297 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-beba-account-create-update-br42d" event={"ID":"d8c81151-3dea-4340-821f-1e6d7df36926","Type":"ContainerDied","Data":"0ca7c20f99426ba29cae5f3638ad4149c26ad729c07e82fbe78a38671dfc92ce"} Feb 16 17:22:11 crc kubenswrapper[4870]: I0216 17:22:11.566602 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7c59a89-070d-49a5-8e0a-a8f39e4cd000","Type":"ContainerStarted","Data":"f6ed0d9b72cb585a522107dbf52d14b1c36255dab8899e38e96483cd9afd4f2c"} Feb 16 17:22:11 crc kubenswrapper[4870]: I0216 17:22:11.566752 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 17:22:11 crc kubenswrapper[4870]: I0216 17:22:11.566789 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerName="sg-core" containerID="cri-o://f2e8139cf217c7ee26e844b65cbe300034e800ff1b60db4e50c7919cc7051c75" gracePeriod=30 Feb 16 17:22:11 crc kubenswrapper[4870]: I0216 17:22:11.566802 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerName="ceilometer-notification-agent" 
containerID="cri-o://31896bf1967d642420d6addf6e0fc91bb85060be50487c1c869d6df557f3df09" gracePeriod=30 Feb 16 17:22:11 crc kubenswrapper[4870]: I0216 17:22:11.566844 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerName="ceilometer-central-agent" containerID="cri-o://70d616c165c133800e91397a0de1e40aa147d506644c619bd30847021f65a878" gracePeriod=30 Feb 16 17:22:11 crc kubenswrapper[4870]: I0216 17:22:11.566809 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerName="proxy-httpd" containerID="cri-o://f6ed0d9b72cb585a522107dbf52d14b1c36255dab8899e38e96483cd9afd4f2c" gracePeriod=30 Feb 16 17:22:11 crc kubenswrapper[4870]: I0216 17:22:11.569024 4870 generic.go:334] "Generic (PLEG): container finished" podID="9c223101-34a3-41b4-b6ce-5eb5d05692ac" containerID="7fc263b14fed307298a467d214e3045c620ef37d81e34d537321be18fc49a3b1" exitCode=0 Feb 16 17:22:11 crc kubenswrapper[4870]: I0216 17:22:11.569070 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vxqf6" event={"ID":"9c223101-34a3-41b4-b6ce-5eb5d05692ac","Type":"ContainerDied","Data":"7fc263b14fed307298a467d214e3045c620ef37d81e34d537321be18fc49a3b1"} Feb 16 17:22:11 crc kubenswrapper[4870]: I0216 17:22:11.570687 4870 generic.go:334] "Generic (PLEG): container finished" podID="ece93484-8813-4d47-a826-9a8f66cd6d78" containerID="687f030256e15077d250143c31028556dd38e9d94629746cb353d0b2be032dc2" exitCode=0 Feb 16 17:22:11 crc kubenswrapper[4870]: I0216 17:22:11.570827 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-da3e-account-create-update-q4dwp" event={"ID":"ece93484-8813-4d47-a826-9a8f66cd6d78","Type":"ContainerDied","Data":"687f030256e15077d250143c31028556dd38e9d94629746cb353d0b2be032dc2"} Feb 16 17:22:11 crc kubenswrapper[4870]: I0216 
17:22:11.610004 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.296816921 podStartE2EDuration="7.609989636s" podCreationTimestamp="2026-02-16 17:22:04 +0000 UTC" firstStartedPulling="2026-02-16 17:22:05.604087083 +0000 UTC m=+1330.087551467" lastFinishedPulling="2026-02-16 17:22:10.917259798 +0000 UTC m=+1335.400724182" observedRunningTime="2026-02-16 17:22:11.608033001 +0000 UTC m=+1336.091497385" watchObservedRunningTime="2026-02-16 17:22:11.609989636 +0000 UTC m=+1336.093454020" Feb 16 17:22:11 crc kubenswrapper[4870]: I0216 17:22:11.658720 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:11 crc kubenswrapper[4870]: I0216 17:22:11.659348 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:11 crc kubenswrapper[4870]: I0216 17:22:11.745284 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:11 crc kubenswrapper[4870]: I0216 17:22:11.880618 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.313274 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jbd2m" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.470836 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22bac8e0-a77f-44bc-8011-3f676864a0e1-operator-scripts\") pod \"22bac8e0-a77f-44bc-8011-3f676864a0e1\" (UID: \"22bac8e0-a77f-44bc-8011-3f676864a0e1\") " Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.470920 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n75w2\" (UniqueName: \"kubernetes.io/projected/22bac8e0-a77f-44bc-8011-3f676864a0e1-kube-api-access-n75w2\") pod \"22bac8e0-a77f-44bc-8011-3f676864a0e1\" (UID: \"22bac8e0-a77f-44bc-8011-3f676864a0e1\") " Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.474727 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22bac8e0-a77f-44bc-8011-3f676864a0e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22bac8e0-a77f-44bc-8011-3f676864a0e1" (UID: "22bac8e0-a77f-44bc-8011-3f676864a0e1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.479151 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22bac8e0-a77f-44bc-8011-3f676864a0e1-kube-api-access-n75w2" (OuterVolumeSpecName: "kube-api-access-n75w2") pod "22bac8e0-a77f-44bc-8011-3f676864a0e1" (UID: "22bac8e0-a77f-44bc-8011-3f676864a0e1"). InnerVolumeSpecName "kube-api-access-n75w2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.574796 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22bac8e0-a77f-44bc-8011-3f676864a0e1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.574843 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n75w2\" (UniqueName: \"kubernetes.io/projected/22bac8e0-a77f-44bc-8011-3f676864a0e1-kube-api-access-n75w2\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.592810 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-663e-account-create-update-ndbzz" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.593791 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4jdm2" event={"ID":"2a0c8689-ac35-4b83-99dc-bdda1ce12ec8","Type":"ContainerDied","Data":"cf2c573b9abed36de70952b9704df2e9ac165366de90d44c9c09fb7a79bd5ab4"} Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.593819 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf2c573b9abed36de70952b9704df2e9ac165366de90d44c9c09fb7a79bd5ab4" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.599394 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4jdm2" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.599852 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-663e-account-create-update-ndbzz" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.599845 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-663e-account-create-update-ndbzz" event={"ID":"f9a05c82-dcac-4d4c-8309-8c2d6389b31b","Type":"ContainerDied","Data":"115420ffa8b1b47b147f53f1fbcb0a29c6f9c4e5d1656b91bdc9c5b7bc2064e0"} Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.599977 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="115420ffa8b1b47b147f53f1fbcb0a29c6f9c4e5d1656b91bdc9c5b7bc2064e0" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.603371 4870 generic.go:334] "Generic (PLEG): container finished" podID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerID="f6ed0d9b72cb585a522107dbf52d14b1c36255dab8899e38e96483cd9afd4f2c" exitCode=0 Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.603396 4870 generic.go:334] "Generic (PLEG): container finished" podID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerID="f2e8139cf217c7ee26e844b65cbe300034e800ff1b60db4e50c7919cc7051c75" exitCode=2 Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.603406 4870 generic.go:334] "Generic (PLEG): container finished" podID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerID="31896bf1967d642420d6addf6e0fc91bb85060be50487c1c869d6df557f3df09" exitCode=0 Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.603416 4870 generic.go:334] "Generic (PLEG): container finished" podID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerID="70d616c165c133800e91397a0de1e40aa147d506644c619bd30847021f65a878" exitCode=0 Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.603439 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7c59a89-070d-49a5-8e0a-a8f39e4cd000","Type":"ContainerDied","Data":"f6ed0d9b72cb585a522107dbf52d14b1c36255dab8899e38e96483cd9afd4f2c"} Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 
17:22:12.603474 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7c59a89-070d-49a5-8e0a-a8f39e4cd000","Type":"ContainerDied","Data":"f2e8139cf217c7ee26e844b65cbe300034e800ff1b60db4e50c7919cc7051c75"} Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.603488 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7c59a89-070d-49a5-8e0a-a8f39e4cd000","Type":"ContainerDied","Data":"31896bf1967d642420d6addf6e0fc91bb85060be50487c1c869d6df557f3df09"} Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.603496 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7c59a89-070d-49a5-8e0a-a8f39e4cd000","Type":"ContainerDied","Data":"70d616c165c133800e91397a0de1e40aa147d506644c619bd30847021f65a878"} Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.612074 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jbd2m" event={"ID":"22bac8e0-a77f-44bc-8011-3f676864a0e1","Type":"ContainerDied","Data":"af538ec8dc0950874d90389d571b4ebced417a2efe70aaae41f8fd26dc0d239c"} Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.612123 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af538ec8dc0950874d90389d571b4ebced417a2efe70aaae41f8fd26dc0d239c" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.612620 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jbd2m" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.613389 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.613432 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.778852 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a05c82-dcac-4d4c-8309-8c2d6389b31b-operator-scripts\") pod \"f9a05c82-dcac-4d4c-8309-8c2d6389b31b\" (UID: \"f9a05c82-dcac-4d4c-8309-8c2d6389b31b\") " Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.779334 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a0c8689-ac35-4b83-99dc-bdda1ce12ec8-operator-scripts\") pod \"2a0c8689-ac35-4b83-99dc-bdda1ce12ec8\" (UID: \"2a0c8689-ac35-4b83-99dc-bdda1ce12ec8\") " Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.779499 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwwt9\" (UniqueName: \"kubernetes.io/projected/f9a05c82-dcac-4d4c-8309-8c2d6389b31b-kube-api-access-lwwt9\") pod \"f9a05c82-dcac-4d4c-8309-8c2d6389b31b\" (UID: \"f9a05c82-dcac-4d4c-8309-8c2d6389b31b\") " Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.779564 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7mfq\" (UniqueName: \"kubernetes.io/projected/2a0c8689-ac35-4b83-99dc-bdda1ce12ec8-kube-api-access-p7mfq\") pod \"2a0c8689-ac35-4b83-99dc-bdda1ce12ec8\" (UID: \"2a0c8689-ac35-4b83-99dc-bdda1ce12ec8\") " Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.779827 4870 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/f9a05c82-dcac-4d4c-8309-8c2d6389b31b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9a05c82-dcac-4d4c-8309-8c2d6389b31b" (UID: "f9a05c82-dcac-4d4c-8309-8c2d6389b31b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.780911 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a0c8689-ac35-4b83-99dc-bdda1ce12ec8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a0c8689-ac35-4b83-99dc-bdda1ce12ec8" (UID: "2a0c8689-ac35-4b83-99dc-bdda1ce12ec8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.783534 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a0c8689-ac35-4b83-99dc-bdda1ce12ec8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.783561 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a05c82-dcac-4d4c-8309-8c2d6389b31b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.792098 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a05c82-dcac-4d4c-8309-8c2d6389b31b-kube-api-access-lwwt9" (OuterVolumeSpecName: "kube-api-access-lwwt9") pod "f9a05c82-dcac-4d4c-8309-8c2d6389b31b" (UID: "f9a05c82-dcac-4d4c-8309-8c2d6389b31b"). InnerVolumeSpecName "kube-api-access-lwwt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.792403 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a0c8689-ac35-4b83-99dc-bdda1ce12ec8-kube-api-access-p7mfq" (OuterVolumeSpecName: "kube-api-access-p7mfq") pod "2a0c8689-ac35-4b83-99dc-bdda1ce12ec8" (UID: "2a0c8689-ac35-4b83-99dc-bdda1ce12ec8"). InnerVolumeSpecName "kube-api-access-p7mfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.885452 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwwt9\" (UniqueName: \"kubernetes.io/projected/f9a05c82-dcac-4d4c-8309-8c2d6389b31b-kube-api-access-lwwt9\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.885490 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7mfq\" (UniqueName: \"kubernetes.io/projected/2a0c8689-ac35-4b83-99dc-bdda1ce12ec8-kube-api-access-p7mfq\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:12 crc kubenswrapper[4870]: I0216 17:22:12.954046 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vxqf6" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.091575 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-776m2\" (UniqueName: \"kubernetes.io/projected/9c223101-34a3-41b4-b6ce-5eb5d05692ac-kube-api-access-776m2\") pod \"9c223101-34a3-41b4-b6ce-5eb5d05692ac\" (UID: \"9c223101-34a3-41b4-b6ce-5eb5d05692ac\") " Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.091892 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c223101-34a3-41b4-b6ce-5eb5d05692ac-operator-scripts\") pod \"9c223101-34a3-41b4-b6ce-5eb5d05692ac\" (UID: \"9c223101-34a3-41b4-b6ce-5eb5d05692ac\") " Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.093044 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c223101-34a3-41b4-b6ce-5eb5d05692ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c223101-34a3-41b4-b6ce-5eb5d05692ac" (UID: "9c223101-34a3-41b4-b6ce-5eb5d05692ac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.107833 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c223101-34a3-41b4-b6ce-5eb5d05692ac-kube-api-access-776m2" (OuterVolumeSpecName: "kube-api-access-776m2") pod "9c223101-34a3-41b4-b6ce-5eb5d05692ac" (UID: "9c223101-34a3-41b4-b6ce-5eb5d05692ac"). InnerVolumeSpecName "kube-api-access-776m2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.194366 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c223101-34a3-41b4-b6ce-5eb5d05692ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.194401 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-776m2\" (UniqueName: \"kubernetes.io/projected/9c223101-34a3-41b4-b6ce-5eb5d05692ac-kube-api-access-776m2\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.421764 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-da3e-account-create-update-q4dwp" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.429842 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-beba-account-create-update-br42d" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.508306 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8c81151-3dea-4340-821f-1e6d7df36926-operator-scripts\") pod \"d8c81151-3dea-4340-821f-1e6d7df36926\" (UID: \"d8c81151-3dea-4340-821f-1e6d7df36926\") " Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.508386 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece93484-8813-4d47-a826-9a8f66cd6d78-operator-scripts\") pod \"ece93484-8813-4d47-a826-9a8f66cd6d78\" (UID: \"ece93484-8813-4d47-a826-9a8f66cd6d78\") " Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.508523 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppn52\" (UniqueName: 
\"kubernetes.io/projected/ece93484-8813-4d47-a826-9a8f66cd6d78-kube-api-access-ppn52\") pod \"ece93484-8813-4d47-a826-9a8f66cd6d78\" (UID: \"ece93484-8813-4d47-a826-9a8f66cd6d78\") " Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.508581 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfwfd\" (UniqueName: \"kubernetes.io/projected/d8c81151-3dea-4340-821f-1e6d7df36926-kube-api-access-tfwfd\") pod \"d8c81151-3dea-4340-821f-1e6d7df36926\" (UID: \"d8c81151-3dea-4340-821f-1e6d7df36926\") " Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.510043 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece93484-8813-4d47-a826-9a8f66cd6d78-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ece93484-8813-4d47-a826-9a8f66cd6d78" (UID: "ece93484-8813-4d47-a826-9a8f66cd6d78"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.510421 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c81151-3dea-4340-821f-1e6d7df36926-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8c81151-3dea-4340-821f-1e6d7df36926" (UID: "d8c81151-3dea-4340-821f-1e6d7df36926"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.517377 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c81151-3dea-4340-821f-1e6d7df36926-kube-api-access-tfwfd" (OuterVolumeSpecName: "kube-api-access-tfwfd") pod "d8c81151-3dea-4340-821f-1e6d7df36926" (UID: "d8c81151-3dea-4340-821f-1e6d7df36926"). InnerVolumeSpecName "kube-api-access-tfwfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.545164 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece93484-8813-4d47-a826-9a8f66cd6d78-kube-api-access-ppn52" (OuterVolumeSpecName: "kube-api-access-ppn52") pod "ece93484-8813-4d47-a826-9a8f66cd6d78" (UID: "ece93484-8813-4d47-a826-9a8f66cd6d78"). InnerVolumeSpecName "kube-api-access-ppn52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.582564 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.611011 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppn52\" (UniqueName: \"kubernetes.io/projected/ece93484-8813-4d47-a826-9a8f66cd6d78-kube-api-access-ppn52\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.611045 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfwfd\" (UniqueName: \"kubernetes.io/projected/d8c81151-3dea-4340-821f-1e6d7df36926-kube-api-access-tfwfd\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.611058 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8c81151-3dea-4340-821f-1e6d7df36926-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.611069 4870 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece93484-8813-4d47-a826-9a8f66cd6d78-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.620834 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-da3e-account-create-update-q4dwp" 
event={"ID":"ece93484-8813-4d47-a826-9a8f66cd6d78","Type":"ContainerDied","Data":"b8c034466da8dbfa55e2f8175cd2ab79075c07516365371d048218903c28260c"} Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.620885 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8c034466da8dbfa55e2f8175cd2ab79075c07516365371d048218903c28260c" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.620847 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-da3e-account-create-update-q4dwp" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.624489 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-beba-account-create-update-br42d" event={"ID":"d8c81151-3dea-4340-821f-1e6d7df36926","Type":"ContainerDied","Data":"c80e980e89c7ae41763c62e605f8c35e8f4e5a0897332567a674923f6fc02f8d"} Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.624549 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c80e980e89c7ae41763c62e605f8c35e8f4e5a0897332567a674923f6fc02f8d" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.624520 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-beba-account-create-update-br42d" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.628218 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7c59a89-070d-49a5-8e0a-a8f39e4cd000","Type":"ContainerDied","Data":"bd986942181184c00c272539c09493c86782e8b29e32d411e3ad1646a03cb15b"} Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.628265 4870 scope.go:117] "RemoveContainer" containerID="f6ed0d9b72cb585a522107dbf52d14b1c36255dab8899e38e96483cd9afd4f2c" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.628386 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.640865 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4jdm2" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.641550 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vxqf6" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.641607 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vxqf6" event={"ID":"9c223101-34a3-41b4-b6ce-5eb5d05692ac","Type":"ContainerDied","Data":"854b235801772b9982ea95a77271ed0be6cba651aa56d2d234fcb0521bfe1975"} Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.641641 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="854b235801772b9982ea95a77271ed0be6cba651aa56d2d234fcb0521bfe1975" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.662834 4870 scope.go:117] "RemoveContainer" containerID="f2e8139cf217c7ee26e844b65cbe300034e800ff1b60db4e50c7919cc7051c75" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.702644 4870 scope.go:117] "RemoveContainer" containerID="31896bf1967d642420d6addf6e0fc91bb85060be50487c1c869d6df557f3df09" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.712480 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-scripts\") pod \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.712605 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-sg-core-conf-yaml\") pod \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\" (UID: 
\"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.712640 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-combined-ca-bundle\") pod \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.712733 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-run-httpd\") pod \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.712865 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-config-data\") pod \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.712901 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pz9l\" (UniqueName: \"kubernetes.io/projected/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-kube-api-access-7pz9l\") pod \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.712937 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-log-httpd\") pod \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\" (UID: \"d7c59a89-070d-49a5-8e0a-a8f39e4cd000\") " Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.715561 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d7c59a89-070d-49a5-8e0a-a8f39e4cd000" (UID: "d7c59a89-070d-49a5-8e0a-a8f39e4cd000"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.715792 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d7c59a89-070d-49a5-8e0a-a8f39e4cd000" (UID: "d7c59a89-070d-49a5-8e0a-a8f39e4cd000"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.719259 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-scripts" (OuterVolumeSpecName: "scripts") pod "d7c59a89-070d-49a5-8e0a-a8f39e4cd000" (UID: "d7c59a89-070d-49a5-8e0a-a8f39e4cd000"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.727915 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-kube-api-access-7pz9l" (OuterVolumeSpecName: "kube-api-access-7pz9l") pod "d7c59a89-070d-49a5-8e0a-a8f39e4cd000" (UID: "d7c59a89-070d-49a5-8e0a-a8f39e4cd000"). InnerVolumeSpecName "kube-api-access-7pz9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.742217 4870 scope.go:117] "RemoveContainer" containerID="70d616c165c133800e91397a0de1e40aa147d506644c619bd30847021f65a878" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.763423 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d7c59a89-070d-49a5-8e0a-a8f39e4cd000" (UID: "d7c59a89-070d-49a5-8e0a-a8f39e4cd000"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.815884 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.815908 4870 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.815917 4870 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.815925 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pz9l\" (UniqueName: \"kubernetes.io/projected/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-kube-api-access-7pz9l\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.815933 4870 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-log-httpd\") on node \"crc\" DevicePath \"\"" 
Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.824417 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7c59a89-070d-49a5-8e0a-a8f39e4cd000" (UID: "d7c59a89-070d-49a5-8e0a-a8f39e4cd000"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.842093 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-config-data" (OuterVolumeSpecName: "config-data") pod "d7c59a89-070d-49a5-8e0a-a8f39e4cd000" (UID: "d7c59a89-070d-49a5-8e0a-a8f39e4cd000"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.917515 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.917555 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7c59a89-070d-49a5-8e0a-a8f39e4cd000-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.967475 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.977206 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.991900 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:22:13 crc kubenswrapper[4870]: E0216 17:22:13.992338 4870 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ece93484-8813-4d47-a826-9a8f66cd6d78" containerName="mariadb-account-create-update" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992354 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece93484-8813-4d47-a826-9a8f66cd6d78" containerName="mariadb-account-create-update" Feb 16 17:22:13 crc kubenswrapper[4870]: E0216 17:22:13.992369 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a0c8689-ac35-4b83-99dc-bdda1ce12ec8" containerName="mariadb-database-create" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992377 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a0c8689-ac35-4b83-99dc-bdda1ce12ec8" containerName="mariadb-database-create" Feb 16 17:22:13 crc kubenswrapper[4870]: E0216 17:22:13.992387 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22bac8e0-a77f-44bc-8011-3f676864a0e1" containerName="mariadb-database-create" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992393 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="22bac8e0-a77f-44bc-8011-3f676864a0e1" containerName="mariadb-database-create" Feb 16 17:22:13 crc kubenswrapper[4870]: E0216 17:22:13.992407 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerName="proxy-httpd" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992413 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerName="proxy-httpd" Feb 16 17:22:13 crc kubenswrapper[4870]: E0216 17:22:13.992424 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c81151-3dea-4340-821f-1e6d7df36926" containerName="mariadb-account-create-update" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992431 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c81151-3dea-4340-821f-1e6d7df36926" containerName="mariadb-account-create-update" Feb 16 17:22:13 crc kubenswrapper[4870]: E0216 
17:22:13.992441 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c223101-34a3-41b4-b6ce-5eb5d05692ac" containerName="mariadb-database-create" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992447 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c223101-34a3-41b4-b6ce-5eb5d05692ac" containerName="mariadb-database-create" Feb 16 17:22:13 crc kubenswrapper[4870]: E0216 17:22:13.992454 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a05c82-dcac-4d4c-8309-8c2d6389b31b" containerName="mariadb-account-create-update" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992459 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a05c82-dcac-4d4c-8309-8c2d6389b31b" containerName="mariadb-account-create-update" Feb 16 17:22:13 crc kubenswrapper[4870]: E0216 17:22:13.992470 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerName="ceilometer-notification-agent" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992475 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerName="ceilometer-notification-agent" Feb 16 17:22:13 crc kubenswrapper[4870]: E0216 17:22:13.992490 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerName="ceilometer-central-agent" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992496 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerName="ceilometer-central-agent" Feb 16 17:22:13 crc kubenswrapper[4870]: E0216 17:22:13.992506 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerName="sg-core" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992512 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" 
containerName="sg-core" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992726 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a05c82-dcac-4d4c-8309-8c2d6389b31b" containerName="mariadb-account-create-update" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992743 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece93484-8813-4d47-a826-9a8f66cd6d78" containerName="mariadb-account-create-update" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992759 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerName="ceilometer-central-agent" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992767 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a0c8689-ac35-4b83-99dc-bdda1ce12ec8" containerName="mariadb-database-create" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992779 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c223101-34a3-41b4-b6ce-5eb5d05692ac" containerName="mariadb-database-create" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992788 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerName="sg-core" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992799 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="22bac8e0-a77f-44bc-8011-3f676864a0e1" containerName="mariadb-database-create" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992813 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerName="proxy-httpd" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992825 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" containerName="ceilometer-notification-agent" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.992840 4870 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c81151-3dea-4340-821f-1e6d7df36926" containerName="mariadb-account-create-update" Feb 16 17:22:13 crc kubenswrapper[4870]: I0216 17:22:13.998490 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.001880 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.003991 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.034877 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.120996 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-scripts\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.121671 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjs8j\" (UniqueName: \"kubernetes.io/projected/487ac54a-c264-4b0a-bb51-3e743f305437-kube-api-access-gjs8j\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.121803 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.122098 4870 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.122246 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/487ac54a-c264-4b0a-bb51-3e743f305437-run-httpd\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.122444 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-config-data\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.122597 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/487ac54a-c264-4b0a-bb51-3e743f305437-log-httpd\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.224158 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.224214 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/487ac54a-c264-4b0a-bb51-3e743f305437-run-httpd\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.224245 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-config-data\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.224270 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/487ac54a-c264-4b0a-bb51-3e743f305437-log-httpd\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.224319 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-scripts\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.224369 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjs8j\" (UniqueName: \"kubernetes.io/projected/487ac54a-c264-4b0a-bb51-3e743f305437-kube-api-access-gjs8j\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.224402 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 
17:22:14.225251 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/487ac54a-c264-4b0a-bb51-3e743f305437-log-httpd\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.225278 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/487ac54a-c264-4b0a-bb51-3e743f305437-run-httpd\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.239447 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.239686 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.240116 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-scripts\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.245512 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjs8j\" (UniqueName: \"kubernetes.io/projected/487ac54a-c264-4b0a-bb51-3e743f305437-kube-api-access-gjs8j\") pod \"ceilometer-0\" (UID: 
\"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.264637 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-config-data\") pod \"ceilometer-0\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") " pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.303287 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7c59a89-070d-49a5-8e0a-a8f39e4cd000" path="/var/lib/kubelet/pods/d7c59a89-070d-49a5-8e0a-a8f39e4cd000/volumes" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.319841 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.372134 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.652603 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.653107 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:22:14 crc kubenswrapper[4870]: I0216 17:22:14.851385 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:22:14 crc kubenswrapper[4870]: W0216 17:22:14.851535 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod487ac54a_c264_4b0a_bb51_3e743f305437.slice/crio-96d097d4e5a9e6fd61e17e254d151b597f5b89093defe6df851f519bfc0e04e3 WatchSource:0}: Error finding container 96d097d4e5a9e6fd61e17e254d151b597f5b89093defe6df851f519bfc0e04e3: Status 404 returned error can't find the container with id 96d097d4e5a9e6fd61e17e254d151b597f5b89093defe6df851f519bfc0e04e3 Feb 16 17:22:15 crc 
kubenswrapper[4870]: I0216 17:22:15.663972 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"487ac54a-c264-4b0a-bb51-3e743f305437","Type":"ContainerStarted","Data":"a36a636dee1d50aeba04e8828972fa10c2d16cbf48eb15808065cac3081e4546"}
Feb 16 17:22:15 crc kubenswrapper[4870]: I0216 17:22:15.665405 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"487ac54a-c264-4b0a-bb51-3e743f305437","Type":"ContainerStarted","Data":"96d097d4e5a9e6fd61e17e254d151b597f5b89093defe6df851f519bfc0e04e3"}
Feb 16 17:22:15 crc kubenswrapper[4870]: I0216 17:22:15.800320 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 16 17:22:15 crc kubenswrapper[4870]: I0216 17:22:15.800446 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 17:22:15 crc kubenswrapper[4870]: I0216 17:22:15.802042 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 16 17:22:16 crc kubenswrapper[4870]: I0216 17:22:16.245419 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:22:16 crc kubenswrapper[4870]: I0216 17:22:16.677098 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"487ac54a-c264-4b0a-bb51-3e743f305437","Type":"ContainerStarted","Data":"b5111d23bfcdf603132d1c5f199cefc74c9e564ccde0faeb79ee956c06ef9f12"}
Feb 16 17:22:16 crc kubenswrapper[4870]: I0216 17:22:16.752678 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 16 17:22:16 crc kubenswrapper[4870]: I0216 17:22:16.752996 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 16 17:22:16 crc kubenswrapper[4870]: I0216 17:22:16.797642 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 16 17:22:16 crc kubenswrapper[4870]: I0216 17:22:16.812803 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 16 17:22:17 crc kubenswrapper[4870]: I0216 17:22:17.689117 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"487ac54a-c264-4b0a-bb51-3e743f305437","Type":"ContainerStarted","Data":"ac69aafb741eb2f92fe28becd025de13b32aee982a8059be4000937f70372942"}
Feb 16 17:22:17 crc kubenswrapper[4870]: I0216 17:22:17.689448 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 16 17:22:17 crc kubenswrapper[4870]: I0216 17:22:17.689777 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 16 17:22:18 crc kubenswrapper[4870]: E0216 17:22:18.229848 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89"
Feb 16 17:22:18 crc kubenswrapper[4870]: I0216 17:22:18.706422 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="487ac54a-c264-4b0a-bb51-3e743f305437" containerName="ceilometer-central-agent" containerID="cri-o://a36a636dee1d50aeba04e8828972fa10c2d16cbf48eb15808065cac3081e4546" gracePeriod=30
Feb 16 17:22:18 crc kubenswrapper[4870]: I0216 17:22:18.706801 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"487ac54a-c264-4b0a-bb51-3e743f305437","Type":"ContainerStarted","Data":"02f5c154001e2f745a4ab71a91fc5538b228a2b85bc383ae3b6d9b523f195c79"}
Feb 16 17:22:18 crc kubenswrapper[4870]: I0216 17:22:18.706852 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 17:22:18 crc kubenswrapper[4870]: I0216 17:22:18.707264 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="487ac54a-c264-4b0a-bb51-3e743f305437" containerName="proxy-httpd" containerID="cri-o://02f5c154001e2f745a4ab71a91fc5538b228a2b85bc383ae3b6d9b523f195c79" gracePeriod=30
Feb 16 17:22:18 crc kubenswrapper[4870]: I0216 17:22:18.707329 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="487ac54a-c264-4b0a-bb51-3e743f305437" containerName="sg-core" containerID="cri-o://ac69aafb741eb2f92fe28becd025de13b32aee982a8059be4000937f70372942" gracePeriod=30
Feb 16 17:22:18 crc kubenswrapper[4870]: I0216 17:22:18.707391 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="487ac54a-c264-4b0a-bb51-3e743f305437" containerName="ceilometer-notification-agent" containerID="cri-o://b5111d23bfcdf603132d1c5f199cefc74c9e564ccde0faeb79ee956c06ef9f12" gracePeriod=30
Feb 16 17:22:18 crc kubenswrapper[4870]: I0216 17:22:18.740979 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.344047041 podStartE2EDuration="5.740939639s" podCreationTimestamp="2026-02-16 17:22:13 +0000 UTC" firstStartedPulling="2026-02-16 17:22:14.856432513 +0000 UTC m=+1339.339896897" lastFinishedPulling="2026-02-16 17:22:18.253325101 +0000 UTC m=+1342.736789495" observedRunningTime="2026-02-16 17:22:18.730056298 +0000 UTC m=+1343.213520692" watchObservedRunningTime="2026-02-16 17:22:18.740939639 +0000 UTC m=+1343.224404023"
Feb 16 17:22:18 crc kubenswrapper[4870]: I0216 17:22:18.994242 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-k452h"]
Feb 16 17:22:18 crc kubenswrapper[4870]: I0216 17:22:18.997427 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-k452h"
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.000229 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.000652 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.008976 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nj6b6"
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.012693 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-k452h"]
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.161633 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-scripts\") pod \"nova-cell0-conductor-db-sync-k452h\" (UID: \"849da73d-204b-4434-aadb-cb79ab8aaca8\") " pod="openstack/nova-cell0-conductor-db-sync-k452h"
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.161803 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-k452h\" (UID: \"849da73d-204b-4434-aadb-cb79ab8aaca8\") " pod="openstack/nova-cell0-conductor-db-sync-k452h"
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.161848 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtb62\" (UniqueName: \"kubernetes.io/projected/849da73d-204b-4434-aadb-cb79ab8aaca8-kube-api-access-wtb62\") pod \"nova-cell0-conductor-db-sync-k452h\" (UID: \"849da73d-204b-4434-aadb-cb79ab8aaca8\") " pod="openstack/nova-cell0-conductor-db-sync-k452h"
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.162052 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-config-data\") pod \"nova-cell0-conductor-db-sync-k452h\" (UID: \"849da73d-204b-4434-aadb-cb79ab8aaca8\") " pod="openstack/nova-cell0-conductor-db-sync-k452h"
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.264030 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-scripts\") pod \"nova-cell0-conductor-db-sync-k452h\" (UID: \"849da73d-204b-4434-aadb-cb79ab8aaca8\") " pod="openstack/nova-cell0-conductor-db-sync-k452h"
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.264167 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-k452h\" (UID: \"849da73d-204b-4434-aadb-cb79ab8aaca8\") " pod="openstack/nova-cell0-conductor-db-sync-k452h"
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.264203 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtb62\" (UniqueName: \"kubernetes.io/projected/849da73d-204b-4434-aadb-cb79ab8aaca8-kube-api-access-wtb62\") pod \"nova-cell0-conductor-db-sync-k452h\" (UID: \"849da73d-204b-4434-aadb-cb79ab8aaca8\") " pod="openstack/nova-cell0-conductor-db-sync-k452h"
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.264358 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-config-data\") pod \"nova-cell0-conductor-db-sync-k452h\" (UID: \"849da73d-204b-4434-aadb-cb79ab8aaca8\") " pod="openstack/nova-cell0-conductor-db-sync-k452h"
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.272799 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-config-data\") pod \"nova-cell0-conductor-db-sync-k452h\" (UID: \"849da73d-204b-4434-aadb-cb79ab8aaca8\") " pod="openstack/nova-cell0-conductor-db-sync-k452h"
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.273260 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-k452h\" (UID: \"849da73d-204b-4434-aadb-cb79ab8aaca8\") " pod="openstack/nova-cell0-conductor-db-sync-k452h"
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.274620 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-scripts\") pod \"nova-cell0-conductor-db-sync-k452h\" (UID: \"849da73d-204b-4434-aadb-cb79ab8aaca8\") " pod="openstack/nova-cell0-conductor-db-sync-k452h"
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.289453 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtb62\" (UniqueName: \"kubernetes.io/projected/849da73d-204b-4434-aadb-cb79ab8aaca8-kube-api-access-wtb62\") pod \"nova-cell0-conductor-db-sync-k452h\" (UID: \"849da73d-204b-4434-aadb-cb79ab8aaca8\") " pod="openstack/nova-cell0-conductor-db-sync-k452h"
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.318932 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-k452h"
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.733554 4870 generic.go:334] "Generic (PLEG): container finished" podID="487ac54a-c264-4b0a-bb51-3e743f305437" containerID="02f5c154001e2f745a4ab71a91fc5538b228a2b85bc383ae3b6d9b523f195c79" exitCode=0
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.733823 4870 generic.go:334] "Generic (PLEG): container finished" podID="487ac54a-c264-4b0a-bb51-3e743f305437" containerID="ac69aafb741eb2f92fe28becd025de13b32aee982a8059be4000937f70372942" exitCode=2
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.733834 4870 generic.go:334] "Generic (PLEG): container finished" podID="487ac54a-c264-4b0a-bb51-3e743f305437" containerID="b5111d23bfcdf603132d1c5f199cefc74c9e564ccde0faeb79ee956c06ef9f12" exitCode=0
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.733858 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"487ac54a-c264-4b0a-bb51-3e743f305437","Type":"ContainerDied","Data":"02f5c154001e2f745a4ab71a91fc5538b228a2b85bc383ae3b6d9b523f195c79"}
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.733885 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"487ac54a-c264-4b0a-bb51-3e743f305437","Type":"ContainerDied","Data":"ac69aafb741eb2f92fe28becd025de13b32aee982a8059be4000937f70372942"}
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.733898 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"487ac54a-c264-4b0a-bb51-3e743f305437","Type":"ContainerDied","Data":"b5111d23bfcdf603132d1c5f199cefc74c9e564ccde0faeb79ee956c06ef9f12"}
Feb 16 17:22:19 crc kubenswrapper[4870]: I0216 17:22:19.939284 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-k452h"]
Feb 16 17:22:20 crc kubenswrapper[4870]: I0216 17:22:20.770067 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-k452h" event={"ID":"849da73d-204b-4434-aadb-cb79ab8aaca8","Type":"ContainerStarted","Data":"78093dc4e866286037afbc0e3506cfb35dea8942287922a8bb2bd339573f077a"}
Feb 16 17:22:20 crc kubenswrapper[4870]: I0216 17:22:20.778901 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 16 17:22:20 crc kubenswrapper[4870]: I0216 17:22:20.779053 4870 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 17:22:20 crc kubenswrapper[4870]: I0216 17:22:20.897866 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 16 17:22:29 crc kubenswrapper[4870]: I0216 17:22:29.872906 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-k452h" event={"ID":"849da73d-204b-4434-aadb-cb79ab8aaca8","Type":"ContainerStarted","Data":"ba409e796a259b013e0f7ce04c14b6f5a1b41e992f53ca3c2a6372de51626b07"}
Feb 16 17:22:29 crc kubenswrapper[4870]: I0216 17:22:29.896873 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-k452h" podStartSLOduration=2.746805331 podStartE2EDuration="11.896856753s" podCreationTimestamp="2026-02-16 17:22:18 +0000 UTC" firstStartedPulling="2026-02-16 17:22:19.956306394 +0000 UTC m=+1344.439770778" lastFinishedPulling="2026-02-16 17:22:29.106357816 +0000 UTC m=+1353.589822200" observedRunningTime="2026-02-16 17:22:29.890590934 +0000 UTC m=+1354.374055318" watchObservedRunningTime="2026-02-16 17:22:29.896856753 +0000 UTC m=+1354.380321137"
Feb 16 17:22:30 crc kubenswrapper[4870]: I0216 17:22:30.887671 4870 generic.go:334] "Generic (PLEG): container finished" podID="487ac54a-c264-4b0a-bb51-3e743f305437" containerID="a36a636dee1d50aeba04e8828972fa10c2d16cbf48eb15808065cac3081e4546" exitCode=0
Feb 16 17:22:30 crc kubenswrapper[4870]: I0216 17:22:30.887860 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"487ac54a-c264-4b0a-bb51-3e743f305437","Type":"ContainerDied","Data":"a36a636dee1d50aeba04e8828972fa10c2d16cbf48eb15808065cac3081e4546"}
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.238402 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.316706 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/487ac54a-c264-4b0a-bb51-3e743f305437-run-httpd\") pod \"487ac54a-c264-4b0a-bb51-3e743f305437\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") "
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.316819 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/487ac54a-c264-4b0a-bb51-3e743f305437-log-httpd\") pod \"487ac54a-c264-4b0a-bb51-3e743f305437\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") "
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.317000 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjs8j\" (UniqueName: \"kubernetes.io/projected/487ac54a-c264-4b0a-bb51-3e743f305437-kube-api-access-gjs8j\") pod \"487ac54a-c264-4b0a-bb51-3e743f305437\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") "
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.317043 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-config-data\") pod \"487ac54a-c264-4b0a-bb51-3e743f305437\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") "
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.317277 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/487ac54a-c264-4b0a-bb51-3e743f305437-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "487ac54a-c264-4b0a-bb51-3e743f305437" (UID: "487ac54a-c264-4b0a-bb51-3e743f305437"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.317379 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/487ac54a-c264-4b0a-bb51-3e743f305437-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "487ac54a-c264-4b0a-bb51-3e743f305437" (UID: "487ac54a-c264-4b0a-bb51-3e743f305437"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.317105 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-combined-ca-bundle\") pod \"487ac54a-c264-4b0a-bb51-3e743f305437\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") "
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.317977 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-sg-core-conf-yaml\") pod \"487ac54a-c264-4b0a-bb51-3e743f305437\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") "
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.318010 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-scripts\") pod \"487ac54a-c264-4b0a-bb51-3e743f305437\" (UID: \"487ac54a-c264-4b0a-bb51-3e743f305437\") "
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.318718 4870 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/487ac54a-c264-4b0a-bb51-3e743f305437-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.318751 4870 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/487ac54a-c264-4b0a-bb51-3e743f305437-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.323915 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-scripts" (OuterVolumeSpecName: "scripts") pod "487ac54a-c264-4b0a-bb51-3e743f305437" (UID: "487ac54a-c264-4b0a-bb51-3e743f305437"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.324169 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/487ac54a-c264-4b0a-bb51-3e743f305437-kube-api-access-gjs8j" (OuterVolumeSpecName: "kube-api-access-gjs8j") pod "487ac54a-c264-4b0a-bb51-3e743f305437" (UID: "487ac54a-c264-4b0a-bb51-3e743f305437"). InnerVolumeSpecName "kube-api-access-gjs8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.357318 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "487ac54a-c264-4b0a-bb51-3e743f305437" (UID: "487ac54a-c264-4b0a-bb51-3e743f305437"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.416143 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "487ac54a-c264-4b0a-bb51-3e743f305437" (UID: "487ac54a-c264-4b0a-bb51-3e743f305437"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.420650 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjs8j\" (UniqueName: \"kubernetes.io/projected/487ac54a-c264-4b0a-bb51-3e743f305437-kube-api-access-gjs8j\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.420694 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.420710 4870 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.420721 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.446704 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-config-data" (OuterVolumeSpecName: "config-data") pod "487ac54a-c264-4b0a-bb51-3e743f305437" (UID: "487ac54a-c264-4b0a-bb51-3e743f305437"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.522299 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/487ac54a-c264-4b0a-bb51-3e743f305437-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.907890 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"487ac54a-c264-4b0a-bb51-3e743f305437","Type":"ContainerDied","Data":"96d097d4e5a9e6fd61e17e254d151b597f5b89093defe6df851f519bfc0e04e3"}
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.907972 4870 scope.go:117] "RemoveContainer" containerID="02f5c154001e2f745a4ab71a91fc5538b228a2b85bc383ae3b6d9b523f195c79"
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.908184 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.946735 4870 scope.go:117] "RemoveContainer" containerID="ac69aafb741eb2f92fe28becd025de13b32aee982a8059be4000937f70372942"
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.950863 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:22:31 crc kubenswrapper[4870]: I0216 17:22:31.985476 4870 scope.go:117] "RemoveContainer" containerID="b5111d23bfcdf603132d1c5f199cefc74c9e564ccde0faeb79ee956c06ef9f12"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.005926 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.015318 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:22:32 crc kubenswrapper[4870]: E0216 17:22:32.016037 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487ac54a-c264-4b0a-bb51-3e743f305437" containerName="ceilometer-notification-agent"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.016060 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="487ac54a-c264-4b0a-bb51-3e743f305437" containerName="ceilometer-notification-agent"
Feb 16 17:22:32 crc kubenswrapper[4870]: E0216 17:22:32.016070 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487ac54a-c264-4b0a-bb51-3e743f305437" containerName="ceilometer-central-agent"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.016077 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="487ac54a-c264-4b0a-bb51-3e743f305437" containerName="ceilometer-central-agent"
Feb 16 17:22:32 crc kubenswrapper[4870]: E0216 17:22:32.016094 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487ac54a-c264-4b0a-bb51-3e743f305437" containerName="proxy-httpd"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.016100 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="487ac54a-c264-4b0a-bb51-3e743f305437" containerName="proxy-httpd"
Feb 16 17:22:32 crc kubenswrapper[4870]: E0216 17:22:32.016114 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487ac54a-c264-4b0a-bb51-3e743f305437" containerName="sg-core"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.016119 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="487ac54a-c264-4b0a-bb51-3e743f305437" containerName="sg-core"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.016329 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="487ac54a-c264-4b0a-bb51-3e743f305437" containerName="ceilometer-notification-agent"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.016341 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="487ac54a-c264-4b0a-bb51-3e743f305437" containerName="sg-core"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.016353 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="487ac54a-c264-4b0a-bb51-3e743f305437" containerName="ceilometer-central-agent"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.016381 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="487ac54a-c264-4b0a-bb51-3e743f305437" containerName="proxy-httpd"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.018345 4870 scope.go:117] "RemoveContainer" containerID="a36a636dee1d50aeba04e8828972fa10c2d16cbf48eb15808065cac3081e4546"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.018375 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.022316 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.022393 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.030709 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.135209 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-run-httpd\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.135336 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.135381 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-scripts\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.135403 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.135427 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-log-httpd\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.135483 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2r9t\" (UniqueName: \"kubernetes.io/projected/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-kube-api-access-q2r9t\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.135686 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-config-data\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: E0216 17:22:32.229862 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.235458 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="487ac54a-c264-4b0a-bb51-3e743f305437" path="/var/lib/kubelet/pods/487ac54a-c264-4b0a-bb51-3e743f305437/volumes"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.237989 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-config-data\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.238081 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-run-httpd\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.238188 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.238235 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-scripts\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.238301 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.239117 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-log-httpd\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.239328 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-log-httpd\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.239474 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2r9t\" (UniqueName: \"kubernetes.io/projected/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-kube-api-access-q2r9t\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.239333 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-run-httpd\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.243904 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-config-data\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.279109 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.280137 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.280746 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-scripts\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.286661 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2r9t\" (UniqueName: \"kubernetes.io/projected/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-kube-api-access-q2r9t\") pod \"ceilometer-0\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.353437 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.842553 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 17:22:32 crc kubenswrapper[4870]: I0216 17:22:32.920880 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ce694d3-c25b-4874-b4fe-ac2d3df6823c","Type":"ContainerStarted","Data":"e57b9be97893daf09dc62f19254c9258e2c4303c7a24a765eefc1d5c6adc5d29"}
Feb 16 17:22:33 crc kubenswrapper[4870]: I0216 17:22:33.930698 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ce694d3-c25b-4874-b4fe-ac2d3df6823c","Type":"ContainerStarted","Data":"72a5f4f9a0c505bf057c114cad802e1d386eda8f9f70036461a9a0d6c5909991"}
Feb 16 17:22:34 crc kubenswrapper[4870]: I0216 17:22:34.945583 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ce694d3-c25b-4874-b4fe-ac2d3df6823c","Type":"ContainerStarted","Data":"89fb7221dbe78129bb53078f7616360cbaf9865d9ef2025c23a28bbe4963f09c"}
Feb 16 17:22:35 crc kubenswrapper[4870]: I0216 17:22:35.968815 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ce694d3-c25b-4874-b4fe-ac2d3df6823c","Type":"ContainerStarted","Data":"320c7eaeb32e313405a02e9d1abd551d557608766920bf3fee6b2fcf35db7cce"}
Feb 16 17:22:36 crc kubenswrapper[4870]: I0216 17:22:36.979519 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ce694d3-c25b-4874-b4fe-ac2d3df6823c","Type":"ContainerStarted","Data":"14a4f534b28c3f00c16fb1440ecacf6c9b61b2d59b425f655a4be3ac0224b02f"}
Feb 16 17:22:36 crc kubenswrapper[4870]: I0216 17:22:36.980343 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 17:22:37 crc kubenswrapper[4870]: I0216 17:22:37.015178 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.51615424 podStartE2EDuration="6.015156316s" podCreationTimestamp="2026-02-16 17:22:31 +0000 UTC" firstStartedPulling="2026-02-16 17:22:32.833811042 +0000 UTC m=+1357.317275426" lastFinishedPulling="2026-02-16 17:22:36.332813118 +0000 UTC m=+1360.816277502" observedRunningTime="2026-02-16 17:22:37.002170825 +0000 UTC m=+1361.485635209" watchObservedRunningTime="2026-02-16 17:22:37.015156316 +0000 UTC m=+1361.498620700"
Feb 16 17:22:42 crc kubenswrapper[4870]: I0216 17:22:42.076538 4870 generic.go:334] "Generic (PLEG): container finished" podID="849da73d-204b-4434-aadb-cb79ab8aaca8" containerID="ba409e796a259b013e0f7ce04c14b6f5a1b41e992f53ca3c2a6372de51626b07" exitCode=0
Feb 16 17:22:42 crc kubenswrapper[4870]: I0216 17:22:42.076645 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-k452h" event={"ID":"849da73d-204b-4434-aadb-cb79ab8aaca8","Type":"ContainerDied","Data":"ba409e796a259b013e0f7ce04c14b6f5a1b41e992f53ca3c2a6372de51626b07"}
Feb 16 17:22:43 crc kubenswrapper[4870]: I0216 17:22:43.541297 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-k452h" Feb 16 17:22:43 crc kubenswrapper[4870]: I0216 17:22:43.719170 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtb62\" (UniqueName: \"kubernetes.io/projected/849da73d-204b-4434-aadb-cb79ab8aaca8-kube-api-access-wtb62\") pod \"849da73d-204b-4434-aadb-cb79ab8aaca8\" (UID: \"849da73d-204b-4434-aadb-cb79ab8aaca8\") " Feb 16 17:22:43 crc kubenswrapper[4870]: I0216 17:22:43.719289 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-scripts\") pod \"849da73d-204b-4434-aadb-cb79ab8aaca8\" (UID: \"849da73d-204b-4434-aadb-cb79ab8aaca8\") " Feb 16 17:22:43 crc kubenswrapper[4870]: I0216 17:22:43.719564 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-config-data\") pod \"849da73d-204b-4434-aadb-cb79ab8aaca8\" (UID: \"849da73d-204b-4434-aadb-cb79ab8aaca8\") " Feb 16 17:22:43 crc kubenswrapper[4870]: I0216 17:22:43.719597 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-combined-ca-bundle\") pod \"849da73d-204b-4434-aadb-cb79ab8aaca8\" (UID: \"849da73d-204b-4434-aadb-cb79ab8aaca8\") " Feb 16 17:22:43 crc kubenswrapper[4870]: I0216 17:22:43.727072 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-scripts" (OuterVolumeSpecName: "scripts") pod "849da73d-204b-4434-aadb-cb79ab8aaca8" (UID: "849da73d-204b-4434-aadb-cb79ab8aaca8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:43 crc kubenswrapper[4870]: I0216 17:22:43.727297 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/849da73d-204b-4434-aadb-cb79ab8aaca8-kube-api-access-wtb62" (OuterVolumeSpecName: "kube-api-access-wtb62") pod "849da73d-204b-4434-aadb-cb79ab8aaca8" (UID: "849da73d-204b-4434-aadb-cb79ab8aaca8"). InnerVolumeSpecName "kube-api-access-wtb62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:22:43 crc kubenswrapper[4870]: I0216 17:22:43.750540 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-config-data" (OuterVolumeSpecName: "config-data") pod "849da73d-204b-4434-aadb-cb79ab8aaca8" (UID: "849da73d-204b-4434-aadb-cb79ab8aaca8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:43 crc kubenswrapper[4870]: I0216 17:22:43.766509 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "849da73d-204b-4434-aadb-cb79ab8aaca8" (UID: "849da73d-204b-4434-aadb-cb79ab8aaca8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:22:43 crc kubenswrapper[4870]: I0216 17:22:43.821915 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:43 crc kubenswrapper[4870]: I0216 17:22:43.821978 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:43 crc kubenswrapper[4870]: I0216 17:22:43.821992 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtb62\" (UniqueName: \"kubernetes.io/projected/849da73d-204b-4434-aadb-cb79ab8aaca8-kube-api-access-wtb62\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:43 crc kubenswrapper[4870]: I0216 17:22:43.822001 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/849da73d-204b-4434-aadb-cb79ab8aaca8-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.104040 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-k452h" event={"ID":"849da73d-204b-4434-aadb-cb79ab8aaca8","Type":"ContainerDied","Data":"78093dc4e866286037afbc0e3506cfb35dea8942287922a8bb2bd339573f077a"} Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.104513 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78093dc4e866286037afbc0e3506cfb35dea8942287922a8bb2bd339573f077a" Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.104162 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-k452h" Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.252294 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 17:22:44 crc kubenswrapper[4870]: E0216 17:22:44.253243 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849da73d-204b-4434-aadb-cb79ab8aaca8" containerName="nova-cell0-conductor-db-sync" Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.253277 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="849da73d-204b-4434-aadb-cb79ab8aaca8" containerName="nova-cell0-conductor-db-sync" Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.253601 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="849da73d-204b-4434-aadb-cb79ab8aaca8" containerName="nova-cell0-conductor-db-sync" Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.254984 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.258633 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nj6b6" Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.259574 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.292775 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.434760 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720\") " pod="openstack/nova-cell0-conductor-0" Feb 16 17:22:44 crc kubenswrapper[4870]: 
I0216 17:22:44.434983 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjm9k\" (UniqueName: \"kubernetes.io/projected/7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720-kube-api-access-fjm9k\") pod \"nova-cell0-conductor-0\" (UID: \"7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720\") " pod="openstack/nova-cell0-conductor-0" Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.435210 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720\") " pod="openstack/nova-cell0-conductor-0" Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.538070 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720\") " pod="openstack/nova-cell0-conductor-0" Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.538181 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjm9k\" (UniqueName: \"kubernetes.io/projected/7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720-kube-api-access-fjm9k\") pod \"nova-cell0-conductor-0\" (UID: \"7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720\") " pod="openstack/nova-cell0-conductor-0" Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.538284 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720\") " pod="openstack/nova-cell0-conductor-0" Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.543688 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720\") " pod="openstack/nova-cell0-conductor-0" Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.543786 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720\") " pod="openstack/nova-cell0-conductor-0" Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.564164 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjm9k\" (UniqueName: \"kubernetes.io/projected/7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720-kube-api-access-fjm9k\") pod \"nova-cell0-conductor-0\" (UID: \"7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720\") " pod="openstack/nova-cell0-conductor-0" Feb 16 17:22:44 crc kubenswrapper[4870]: I0216 17:22:44.580668 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 17:22:45 crc kubenswrapper[4870]: I0216 17:22:45.104058 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 17:22:45 crc kubenswrapper[4870]: I0216 17:22:45.120653 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720","Type":"ContainerStarted","Data":"1cc56493e7d08263563bcc8ed7693d0ebd31580276a44865ce98e20e7f916585"} Feb 16 17:22:46 crc kubenswrapper[4870]: I0216 17:22:46.138614 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720","Type":"ContainerStarted","Data":"0f8fc3ecad38f2015134eb397af61fc266c0f17ae0140b591656fd34ef0b885e"} Feb 16 17:22:46 crc kubenswrapper[4870]: I0216 17:22:46.138782 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 17:22:46 crc kubenswrapper[4870]: I0216 17:22:46.158031 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.158006186 podStartE2EDuration="2.158006186s" podCreationTimestamp="2026-02-16 17:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:46.152785557 +0000 UTC m=+1370.636249961" watchObservedRunningTime="2026-02-16 17:22:46.158006186 +0000 UTC m=+1370.641470610" Feb 16 17:22:47 crc kubenswrapper[4870]: E0216 17:22:47.238693 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:22:54 crc 
kubenswrapper[4870]: I0216 17:22:54.630097 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.156277 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7n7cd"] Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.158073 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7n7cd" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.162053 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.162173 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.171864 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7n7cd"] Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.297614 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7n7cd\" (UID: \"2940d957-d580-4cea-8476-ace5524d8af3\") " pod="openstack/nova-cell0-cell-mapping-7n7cd" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.297690 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-config-data\") pod \"nova-cell0-cell-mapping-7n7cd\" (UID: \"2940d957-d580-4cea-8476-ace5524d8af3\") " pod="openstack/nova-cell0-cell-mapping-7n7cd" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.297757 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-r77dh\" (UniqueName: \"kubernetes.io/projected/2940d957-d580-4cea-8476-ace5524d8af3-kube-api-access-r77dh\") pod \"nova-cell0-cell-mapping-7n7cd\" (UID: \"2940d957-d580-4cea-8476-ace5524d8af3\") " pod="openstack/nova-cell0-cell-mapping-7n7cd" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.297831 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-scripts\") pod \"nova-cell0-cell-mapping-7n7cd\" (UID: \"2940d957-d580-4cea-8476-ace5524d8af3\") " pod="openstack/nova-cell0-cell-mapping-7n7cd" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.357099 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.403014 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.414232 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7n7cd\" (UID: \"2940d957-d580-4cea-8476-ace5524d8af3\") " pod="openstack/nova-cell0-cell-mapping-7n7cd" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.414280 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-config-data\") pod \"nova-cell0-cell-mapping-7n7cd\" (UID: \"2940d957-d580-4cea-8476-ace5524d8af3\") " pod="openstack/nova-cell0-cell-mapping-7n7cd" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.414347 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r77dh\" (UniqueName: 
\"kubernetes.io/projected/2940d957-d580-4cea-8476-ace5524d8af3-kube-api-access-r77dh\") pod \"nova-cell0-cell-mapping-7n7cd\" (UID: \"2940d957-d580-4cea-8476-ace5524d8af3\") " pod="openstack/nova-cell0-cell-mapping-7n7cd" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.414389 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-scripts\") pod \"nova-cell0-cell-mapping-7n7cd\" (UID: \"2940d957-d580-4cea-8476-ace5524d8af3\") " pod="openstack/nova-cell0-cell-mapping-7n7cd" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.420443 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.447620 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.455591 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r77dh\" (UniqueName: \"kubernetes.io/projected/2940d957-d580-4cea-8476-ace5524d8af3-kube-api-access-r77dh\") pod \"nova-cell0-cell-mapping-7n7cd\" (UID: \"2940d957-d580-4cea-8476-ace5524d8af3\") " pod="openstack/nova-cell0-cell-mapping-7n7cd" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.482755 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-config-data\") pod \"nova-cell0-cell-mapping-7n7cd\" (UID: \"2940d957-d580-4cea-8476-ace5524d8af3\") " pod="openstack/nova-cell0-cell-mapping-7n7cd" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.539199 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-scripts\") pod \"nova-cell0-cell-mapping-7n7cd\" (UID: \"2940d957-d580-4cea-8476-ace5524d8af3\") " 
pod="openstack/nova-cell0-cell-mapping-7n7cd" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.539403 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-logs\") pod \"nova-api-0\" (UID: \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\") " pod="openstack/nova-api-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.539543 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j6ll\" (UniqueName: \"kubernetes.io/projected/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-kube-api-access-8j6ll\") pod \"nova-api-0\" (UID: \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\") " pod="openstack/nova-api-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.539571 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\") " pod="openstack/nova-api-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.539844 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-config-data\") pod \"nova-api-0\" (UID: \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\") " pod="openstack/nova-api-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.539919 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7n7cd\" (UID: \"2940d957-d580-4cea-8476-ace5524d8af3\") " pod="openstack/nova-cell0-cell-mapping-7n7cd" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.564239 4870 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.565882 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.576475 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.620324 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.642449 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-logs\") pod \"nova-api-0\" (UID: \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\") " pod="openstack/nova-api-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.642518 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\") " pod="openstack/nova-api-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.642539 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j6ll\" (UniqueName: \"kubernetes.io/projected/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-kube-api-access-8j6ll\") pod \"nova-api-0\" (UID: \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\") " pod="openstack/nova-api-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.642599 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-config-data\") pod \"nova-api-0\" (UID: \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\") " pod="openstack/nova-api-0" Feb 16 
17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.647966 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-logs\") pod \"nova-api-0\" (UID: \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\") " pod="openstack/nova-api-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.680443 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\") " pod="openstack/nova-api-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.681999 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.686244 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.687441 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-config-data\") pod \"nova-api-0\" (UID: \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\") " pod="openstack/nova-api-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.690548 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.701049 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j6ll\" (UniqueName: \"kubernetes.io/projected/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-kube-api-access-8j6ll\") pod \"nova-api-0\" (UID: \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\") " pod="openstack/nova-api-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.721449 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-scheduler-0"] Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.745702 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9155842-8a8a-4b08-9c59-a0d1ca601473-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9155842-8a8a-4b08-9c59-a0d1ca601473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.746174 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zvzw\" (UniqueName: \"kubernetes.io/projected/e9155842-8a8a-4b08-9c59-a0d1ca601473-kube-api-access-6zvzw\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9155842-8a8a-4b08-9c59-a0d1ca601473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.746315 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9155842-8a8a-4b08-9c59-a0d1ca601473-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9155842-8a8a-4b08-9c59-a0d1ca601473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.781796 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7n7cd" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.811639 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.820226 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.823650 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.823847 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.851332 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9155842-8a8a-4b08-9c59-a0d1ca601473-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9155842-8a8a-4b08-9c59-a0d1ca601473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.851390 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe113d9-0c75-4ccf-8615-64d80312db3b-config-data\") pod \"nova-scheduler-0\" (UID: \"9fe113d9-0c75-4ccf-8615-64d80312db3b\") " pod="openstack/nova-scheduler-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.851439 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zvzw\" (UniqueName: \"kubernetes.io/projected/e9155842-8a8a-4b08-9c59-a0d1ca601473-kube-api-access-6zvzw\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9155842-8a8a-4b08-9c59-a0d1ca601473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.851478 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe113d9-0c75-4ccf-8615-64d80312db3b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9fe113d9-0c75-4ccf-8615-64d80312db3b\") " pod="openstack/nova-scheduler-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.851516 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9155842-8a8a-4b08-9c59-a0d1ca601473-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9155842-8a8a-4b08-9c59-a0d1ca601473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.851590 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlfn6\" (UniqueName: \"kubernetes.io/projected/9fe113d9-0c75-4ccf-8615-64d80312db3b-kube-api-access-jlfn6\") pod \"nova-scheduler-0\" (UID: \"9fe113d9-0c75-4ccf-8615-64d80312db3b\") " pod="openstack/nova-scheduler-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.880404 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.881293 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9155842-8a8a-4b08-9c59-a0d1ca601473-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9155842-8a8a-4b08-9c59-a0d1ca601473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.889925 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9155842-8a8a-4b08-9c59-a0d1ca601473-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9155842-8a8a-4b08-9c59-a0d1ca601473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.892070 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-7jflt"] Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.904576 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zvzw\" (UniqueName: 
\"kubernetes.io/projected/e9155842-8a8a-4b08-9c59-a0d1ca601473-kube-api-access-6zvzw\") pod \"nova-cell1-novncproxy-0\" (UID: \"e9155842-8a8a-4b08-9c59-a0d1ca601473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.907728 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.916357 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.918507 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-7jflt"] Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.954083 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe113d9-0c75-4ccf-8615-64d80312db3b-config-data\") pod \"nova-scheduler-0\" (UID: \"9fe113d9-0c75-4ccf-8615-64d80312db3b\") " pod="openstack/nova-scheduler-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.954167 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe113d9-0c75-4ccf-8615-64d80312db3b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9fe113d9-0c75-4ccf-8615-64d80312db3b\") " pod="openstack/nova-scheduler-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.954244 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlfn6\" (UniqueName: \"kubernetes.io/projected/9fe113d9-0c75-4ccf-8615-64d80312db3b-kube-api-access-jlfn6\") pod \"nova-scheduler-0\" (UID: \"9fe113d9-0c75-4ccf-8615-64d80312db3b\") " pod="openstack/nova-scheduler-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.954348 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f06e16e-0a03-4b6d-981f-62fe7421a78a-config-data\") pod \"nova-metadata-0\" (UID: \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\") " pod="openstack/nova-metadata-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.954390 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8jjr\" (UniqueName: \"kubernetes.io/projected/1f06e16e-0a03-4b6d-981f-62fe7421a78a-kube-api-access-v8jjr\") pod \"nova-metadata-0\" (UID: \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\") " pod="openstack/nova-metadata-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.954424 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f06e16e-0a03-4b6d-981f-62fe7421a78a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\") " pod="openstack/nova-metadata-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.954453 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f06e16e-0a03-4b6d-981f-62fe7421a78a-logs\") pod \"nova-metadata-0\" (UID: \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\") " pod="openstack/nova-metadata-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.968932 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe113d9-0c75-4ccf-8615-64d80312db3b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9fe113d9-0c75-4ccf-8615-64d80312db3b\") " pod="openstack/nova-scheduler-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.981021 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe113d9-0c75-4ccf-8615-64d80312db3b-config-data\") pod \"nova-scheduler-0\" (UID: 
\"9fe113d9-0c75-4ccf-8615-64d80312db3b\") " pod="openstack/nova-scheduler-0" Feb 16 17:22:55 crc kubenswrapper[4870]: I0216 17:22:55.991607 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlfn6\" (UniqueName: \"kubernetes.io/projected/9fe113d9-0c75-4ccf-8615-64d80312db3b-kube-api-access-jlfn6\") pod \"nova-scheduler-0\" (UID: \"9fe113d9-0c75-4ccf-8615-64d80312db3b\") " pod="openstack/nova-scheduler-0" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.057607 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-config\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.057691 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.057746 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ldd6\" (UniqueName: \"kubernetes.io/projected/81940312-121c-4c05-97cc-d15d742518fc-kube-api-access-7ldd6\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.057867 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-dns-svc\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: 
\"81940312-121c-4c05-97cc-d15d742518fc\") " pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.057918 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f06e16e-0a03-4b6d-981f-62fe7421a78a-config-data\") pod \"nova-metadata-0\" (UID: \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\") " pod="openstack/nova-metadata-0" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.057973 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.058004 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8jjr\" (UniqueName: \"kubernetes.io/projected/1f06e16e-0a03-4b6d-981f-62fe7421a78a-kube-api-access-v8jjr\") pod \"nova-metadata-0\" (UID: \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\") " pod="openstack/nova-metadata-0" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.058047 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f06e16e-0a03-4b6d-981f-62fe7421a78a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\") " pod="openstack/nova-metadata-0" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.058072 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " 
pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.058116 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f06e16e-0a03-4b6d-981f-62fe7421a78a-logs\") pod \"nova-metadata-0\" (UID: \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\") " pod="openstack/nova-metadata-0" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.058643 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f06e16e-0a03-4b6d-981f-62fe7421a78a-logs\") pod \"nova-metadata-0\" (UID: \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\") " pod="openstack/nova-metadata-0" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.066143 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f06e16e-0a03-4b6d-981f-62fe7421a78a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\") " pod="openstack/nova-metadata-0" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.083616 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8jjr\" (UniqueName: \"kubernetes.io/projected/1f06e16e-0a03-4b6d-981f-62fe7421a78a-kube-api-access-v8jjr\") pod \"nova-metadata-0\" (UID: \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\") " pod="openstack/nova-metadata-0" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.096702 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f06e16e-0a03-4b6d-981f-62fe7421a78a-config-data\") pod \"nova-metadata-0\" (UID: \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\") " pod="openstack/nova-metadata-0" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.112882 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.159546 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-dns-svc\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.159609 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.159657 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.159700 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-config\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.159737 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 
crc kubenswrapper[4870]: I0216 17:22:56.159769 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ldd6\" (UniqueName: \"kubernetes.io/projected/81940312-121c-4c05-97cc-d15d742518fc-kube-api-access-7ldd6\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.160984 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-dns-svc\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.161377 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.161697 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-config\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.161919 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.162384 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.184906 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ldd6\" (UniqueName: \"kubernetes.io/projected/81940312-121c-4c05-97cc-d15d742518fc-kube-api-access-7ldd6\") pod \"dnsmasq-dns-757b4f8459-7jflt\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.266439 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.306478 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:56 crc kubenswrapper[4870]: I0216 17:22:56.637485 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7n7cd"] Feb 16 17:22:56 crc kubenswrapper[4870]: W0216 17:22:56.651301 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2940d957_d580_4cea_8476_ace5524d8af3.slice/crio-5a3c4038928d36c579202247eeabba0811674920ebd3e49a91c8d699419b8a31 WatchSource:0}: Error finding container 5a3c4038928d36c579202247eeabba0811674920ebd3e49a91c8d699419b8a31: Status 404 returned error can't find the container with id 5a3c4038928d36c579202247eeabba0811674920ebd3e49a91c8d699419b8a31 Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.110914 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.186776 4870 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-conductor-db-sync-fhbsn"] Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.195729 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fhbsn" Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.198909 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.201282 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.228042 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fhbsn"] Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.253111 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.301133 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-scripts\") pod \"nova-cell1-conductor-db-sync-fhbsn\" (UID: \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\") " pod="openstack/nova-cell1-conductor-db-sync-fhbsn" Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.301198 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fhbsn\" (UID: \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\") " pod="openstack/nova-cell1-conductor-db-sync-fhbsn" Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.301251 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-config-data\") pod \"nova-cell1-conductor-db-sync-fhbsn\" (UID: \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\") " pod="openstack/nova-cell1-conductor-db-sync-fhbsn" Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.301314 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvjxr\" (UniqueName: \"kubernetes.io/projected/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-kube-api-access-lvjxr\") pod \"nova-cell1-conductor-db-sync-fhbsn\" (UID: \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\") " pod="openstack/nova-cell1-conductor-db-sync-fhbsn" Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.346893 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e9155842-8a8a-4b08-9c59-a0d1ca601473","Type":"ContainerStarted","Data":"06e7ed156bd021c44c51c7d9561c8aef5551a61589c6e3e59f36f64b56687034"} Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.348853 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de5b219a-9004-4f4d-8a8a-cd03eae15d3d","Type":"ContainerStarted","Data":"acd8b86b449128bf82081624b003d7a9fa3291454d379bf1f29dbc2a2c2562ff"} Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.360875 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7n7cd" event={"ID":"2940d957-d580-4cea-8476-ace5524d8af3","Type":"ContainerStarted","Data":"4d918c8df808253acefa7999840f6875d7fe2262246e00a6fc68afeb3760040d"} Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.360928 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7n7cd" event={"ID":"2940d957-d580-4cea-8476-ace5524d8af3","Type":"ContainerStarted","Data":"5a3c4038928d36c579202247eeabba0811674920ebd3e49a91c8d699419b8a31"} Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.387115 4870 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7n7cd" podStartSLOduration=2.385927424 podStartE2EDuration="2.385927424s" podCreationTimestamp="2026-02-16 17:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:57.383244277 +0000 UTC m=+1381.866708661" watchObservedRunningTime="2026-02-16 17:22:57.385927424 +0000 UTC m=+1381.869391798" Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.407731 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-scripts\") pod \"nova-cell1-conductor-db-sync-fhbsn\" (UID: \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\") " pod="openstack/nova-cell1-conductor-db-sync-fhbsn" Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.409143 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fhbsn\" (UID: \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\") " pod="openstack/nova-cell1-conductor-db-sync-fhbsn" Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.409332 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-config-data\") pod \"nova-cell1-conductor-db-sync-fhbsn\" (UID: \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\") " pod="openstack/nova-cell1-conductor-db-sync-fhbsn" Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.410380 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvjxr\" (UniqueName: \"kubernetes.io/projected/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-kube-api-access-lvjxr\") pod \"nova-cell1-conductor-db-sync-fhbsn\" (UID: 
\"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\") " pod="openstack/nova-cell1-conductor-db-sync-fhbsn" Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.416890 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fhbsn\" (UID: \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\") " pod="openstack/nova-cell1-conductor-db-sync-fhbsn" Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.417558 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-scripts\") pod \"nova-cell1-conductor-db-sync-fhbsn\" (UID: \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\") " pod="openstack/nova-cell1-conductor-db-sync-fhbsn" Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.422148 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-config-data\") pod \"nova-cell1-conductor-db-sync-fhbsn\" (UID: \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\") " pod="openstack/nova-cell1-conductor-db-sync-fhbsn" Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.435342 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvjxr\" (UniqueName: \"kubernetes.io/projected/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-kube-api-access-lvjxr\") pod \"nova-cell1-conductor-db-sync-fhbsn\" (UID: \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\") " pod="openstack/nova-cell1-conductor-db-sync-fhbsn" Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.524976 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fhbsn" Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.560550 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.600867 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-7jflt"] Feb 16 17:22:57 crc kubenswrapper[4870]: W0216 17:22:57.612871 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fe113d9_0c75_4ccf_8615_64d80312db3b.slice/crio-8a62f37dccd64995bc6612b9bd434d771f690ee7145f0f7d45c8cce869ec46e5 WatchSource:0}: Error finding container 8a62f37dccd64995bc6612b9bd434d771f690ee7145f0f7d45c8cce869ec46e5: Status 404 returned error can't find the container with id 8a62f37dccd64995bc6612b9bd434d771f690ee7145f0f7d45c8cce869ec46e5 Feb 16 17:22:57 crc kubenswrapper[4870]: I0216 17:22:57.659419 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:22:58 crc kubenswrapper[4870]: I0216 17:22:58.151464 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fhbsn"] Feb 16 17:22:58 crc kubenswrapper[4870]: W0216 17:22:58.155781 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6519f0c_2a4f_4712_b3f3_92effbbcec1d.slice/crio-c9b841db59a27829b61aab0897543f022d4683941fb3612d0015f1d2a7ca0b6e WatchSource:0}: Error finding container c9b841db59a27829b61aab0897543f022d4683941fb3612d0015f1d2a7ca0b6e: Status 404 returned error can't find the container with id c9b841db59a27829b61aab0897543f022d4683941fb3612d0015f1d2a7ca0b6e Feb 16 17:22:58 crc kubenswrapper[4870]: I0216 17:22:58.397128 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fhbsn" 
event={"ID":"d6519f0c-2a4f-4712-b3f3-92effbbcec1d","Type":"ContainerStarted","Data":"c9b841db59a27829b61aab0897543f022d4683941fb3612d0015f1d2a7ca0b6e"} Feb 16 17:22:58 crc kubenswrapper[4870]: I0216 17:22:58.410016 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f06e16e-0a03-4b6d-981f-62fe7421a78a","Type":"ContainerStarted","Data":"335fb2a0331eac32f0495709614302512788294c482e79374b64b534c23cabcb"} Feb 16 17:22:58 crc kubenswrapper[4870]: I0216 17:22:58.418497 4870 generic.go:334] "Generic (PLEG): container finished" podID="81940312-121c-4c05-97cc-d15d742518fc" containerID="b6572a5e7419a2d305999a2b9f99298f5534e2dfc3b992dd8eafd1ecab6efcff" exitCode=0 Feb 16 17:22:58 crc kubenswrapper[4870]: I0216 17:22:58.418990 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-7jflt" event={"ID":"81940312-121c-4c05-97cc-d15d742518fc","Type":"ContainerDied","Data":"b6572a5e7419a2d305999a2b9f99298f5534e2dfc3b992dd8eafd1ecab6efcff"} Feb 16 17:22:58 crc kubenswrapper[4870]: I0216 17:22:58.419054 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-7jflt" event={"ID":"81940312-121c-4c05-97cc-d15d742518fc","Type":"ContainerStarted","Data":"adf8c6acbb07f3ed6a49c023d7ef57ef5224b00638353fd554c24b6c9d2c4a96"} Feb 16 17:22:58 crc kubenswrapper[4870]: I0216 17:22:58.428819 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9fe113d9-0c75-4ccf-8615-64d80312db3b","Type":"ContainerStarted","Data":"8a62f37dccd64995bc6612b9bd434d771f690ee7145f0f7d45c8cce869ec46e5"} Feb 16 17:22:59 crc kubenswrapper[4870]: E0216 17:22:59.224704 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" 
podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:22:59 crc kubenswrapper[4870]: I0216 17:22:59.454705 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-7jflt" event={"ID":"81940312-121c-4c05-97cc-d15d742518fc","Type":"ContainerStarted","Data":"0eb45ad9b7c918f116381e1fab55847b2281cc65a4275e742ad8df0312080529"} Feb 16 17:22:59 crc kubenswrapper[4870]: I0216 17:22:59.456082 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:22:59 crc kubenswrapper[4870]: I0216 17:22:59.468283 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fhbsn" event={"ID":"d6519f0c-2a4f-4712-b3f3-92effbbcec1d","Type":"ContainerStarted","Data":"78a154cfc26923be8ac8821860922e406c1b50690f786064b07517d494f302b9"} Feb 16 17:22:59 crc kubenswrapper[4870]: I0216 17:22:59.522082 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-7jflt" podStartSLOduration=4.52204154 podStartE2EDuration="4.52204154s" podCreationTimestamp="2026-02-16 17:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:59.509003678 +0000 UTC m=+1383.992468062" watchObservedRunningTime="2026-02-16 17:22:59.52204154 +0000 UTC m=+1384.005505924" Feb 16 17:22:59 crc kubenswrapper[4870]: I0216 17:22:59.671584 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-fhbsn" podStartSLOduration=2.671555156 podStartE2EDuration="2.671555156s" podCreationTimestamp="2026-02-16 17:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:22:59.54199231 +0000 UTC m=+1384.025456694" watchObservedRunningTime="2026-02-16 17:22:59.671555156 +0000 UTC 
m=+1384.155019560" Feb 16 17:22:59 crc kubenswrapper[4870]: I0216 17:22:59.739291 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:22:59 crc kubenswrapper[4870]: I0216 17:22:59.751289 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:23:02 crc kubenswrapper[4870]: I0216 17:23:02.365237 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 17:23:02 crc kubenswrapper[4870]: I0216 17:23:02.525846 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e9155842-8a8a-4b08-9c59-a0d1ca601473","Type":"ContainerStarted","Data":"2b5fa282fd4daee1481286dd1ef8e29bb2079b54ab1119ebaa1d6a69a461d65c"} Feb 16 17:23:02 crc kubenswrapper[4870]: I0216 17:23:02.526029 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="e9155842-8a8a-4b08-9c59-a0d1ca601473" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://2b5fa282fd4daee1481286dd1ef8e29bb2079b54ab1119ebaa1d6a69a461d65c" gracePeriod=30 Feb 16 17:23:02 crc kubenswrapper[4870]: I0216 17:23:02.537582 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f06e16e-0a03-4b6d-981f-62fe7421a78a","Type":"ContainerStarted","Data":"7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967"} Feb 16 17:23:02 crc kubenswrapper[4870]: I0216 17:23:02.537637 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f06e16e-0a03-4b6d-981f-62fe7421a78a","Type":"ContainerStarted","Data":"a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f"} Feb 16 17:23:02 crc kubenswrapper[4870]: I0216 17:23:02.537779 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1f06e16e-0a03-4b6d-981f-62fe7421a78a" 
containerName="nova-metadata-log" containerID="cri-o://a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f" gracePeriod=30 Feb 16 17:23:02 crc kubenswrapper[4870]: I0216 17:23:02.538096 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1f06e16e-0a03-4b6d-981f-62fe7421a78a" containerName="nova-metadata-metadata" containerID="cri-o://7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967" gracePeriod=30 Feb 16 17:23:02 crc kubenswrapper[4870]: I0216 17:23:02.592675 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.5907128889999997 podStartE2EDuration="7.592654009s" podCreationTimestamp="2026-02-16 17:22:55 +0000 UTC" firstStartedPulling="2026-02-16 17:22:57.192774503 +0000 UTC m=+1381.676238887" lastFinishedPulling="2026-02-16 17:23:01.194715623 +0000 UTC m=+1385.678180007" observedRunningTime="2026-02-16 17:23:02.577212108 +0000 UTC m=+1387.060676502" watchObservedRunningTime="2026-02-16 17:23:02.592654009 +0000 UTC m=+1387.076118393" Feb 16 17:23:02 crc kubenswrapper[4870]: I0216 17:23:02.606356 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de5b219a-9004-4f4d-8a8a-cd03eae15d3d","Type":"ContainerStarted","Data":"6b755386ab3f66d3745906b3e2817450762a6f5ecc08c159db945d6a27324f74"} Feb 16 17:23:02 crc kubenswrapper[4870]: I0216 17:23:02.606412 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de5b219a-9004-4f4d-8a8a-cd03eae15d3d","Type":"ContainerStarted","Data":"901bf00a165d78e88a173a053b72e7523b03f077d4a65a667f60f559566791df"} Feb 16 17:23:02 crc kubenswrapper[4870]: I0216 17:23:02.632562 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.124862138 podStartE2EDuration="7.632542627s" podCreationTimestamp="2026-02-16 17:22:55 +0000 
UTC" firstStartedPulling="2026-02-16 17:22:57.666295203 +0000 UTC m=+1382.149759587" lastFinishedPulling="2026-02-16 17:23:01.173975692 +0000 UTC m=+1385.657440076" observedRunningTime="2026-02-16 17:23:02.607539224 +0000 UTC m=+1387.091003608" watchObservedRunningTime="2026-02-16 17:23:02.632542627 +0000 UTC m=+1387.116007011" Feb 16 17:23:02 crc kubenswrapper[4870]: I0216 17:23:02.653915 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.608378784 podStartE2EDuration="7.653890726s" podCreationTimestamp="2026-02-16 17:22:55 +0000 UTC" firstStartedPulling="2026-02-16 17:22:57.128381197 +0000 UTC m=+1381.611845581" lastFinishedPulling="2026-02-16 17:23:01.173893139 +0000 UTC m=+1385.657357523" observedRunningTime="2026-02-16 17:23:02.630425447 +0000 UTC m=+1387.113889851" watchObservedRunningTime="2026-02-16 17:23:02.653890726 +0000 UTC m=+1387.137355110" Feb 16 17:23:02 crc kubenswrapper[4870]: I0216 17:23:02.663489 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9fe113d9-0c75-4ccf-8615-64d80312db3b","Type":"ContainerStarted","Data":"4797d9051761e832b095227d545456cee6d2ee1335227a3fea177ab59e9a0813"} Feb 16 17:23:02 crc kubenswrapper[4870]: I0216 17:23:02.724664 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.149194163 podStartE2EDuration="7.724643385s" podCreationTimestamp="2026-02-16 17:22:55 +0000 UTC" firstStartedPulling="2026-02-16 17:22:57.617993225 +0000 UTC m=+1382.101457609" lastFinishedPulling="2026-02-16 17:23:01.193442447 +0000 UTC m=+1385.676906831" observedRunningTime="2026-02-16 17:23:02.704734527 +0000 UTC m=+1387.188198911" watchObservedRunningTime="2026-02-16 17:23:02.724643385 +0000 UTC m=+1387.208107769" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.537809 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.613248 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8jjr\" (UniqueName: \"kubernetes.io/projected/1f06e16e-0a03-4b6d-981f-62fe7421a78a-kube-api-access-v8jjr\") pod \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\" (UID: \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\") " Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.613432 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f06e16e-0a03-4b6d-981f-62fe7421a78a-config-data\") pod \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\" (UID: \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\") " Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.613481 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f06e16e-0a03-4b6d-981f-62fe7421a78a-combined-ca-bundle\") pod \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\" (UID: \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\") " Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.613607 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f06e16e-0a03-4b6d-981f-62fe7421a78a-logs\") pod \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\" (UID: \"1f06e16e-0a03-4b6d-981f-62fe7421a78a\") " Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.614126 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f06e16e-0a03-4b6d-981f-62fe7421a78a-logs" (OuterVolumeSpecName: "logs") pod "1f06e16e-0a03-4b6d-981f-62fe7421a78a" (UID: "1f06e16e-0a03-4b6d-981f-62fe7421a78a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.614300 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f06e16e-0a03-4b6d-981f-62fe7421a78a-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.622551 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f06e16e-0a03-4b6d-981f-62fe7421a78a-kube-api-access-v8jjr" (OuterVolumeSpecName: "kube-api-access-v8jjr") pod "1f06e16e-0a03-4b6d-981f-62fe7421a78a" (UID: "1f06e16e-0a03-4b6d-981f-62fe7421a78a"). InnerVolumeSpecName "kube-api-access-v8jjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.648465 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f06e16e-0a03-4b6d-981f-62fe7421a78a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f06e16e-0a03-4b6d-981f-62fe7421a78a" (UID: "1f06e16e-0a03-4b6d-981f-62fe7421a78a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.672116 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f06e16e-0a03-4b6d-981f-62fe7421a78a-config-data" (OuterVolumeSpecName: "config-data") pod "1f06e16e-0a03-4b6d-981f-62fe7421a78a" (UID: "1f06e16e-0a03-4b6d-981f-62fe7421a78a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.680500 4870 generic.go:334] "Generic (PLEG): container finished" podID="1f06e16e-0a03-4b6d-981f-62fe7421a78a" containerID="7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967" exitCode=0 Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.680540 4870 generic.go:334] "Generic (PLEG): container finished" podID="1f06e16e-0a03-4b6d-981f-62fe7421a78a" containerID="a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f" exitCode=143 Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.682379 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.682397 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f06e16e-0a03-4b6d-981f-62fe7421a78a","Type":"ContainerDied","Data":"7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967"} Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.682666 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f06e16e-0a03-4b6d-981f-62fe7421a78a","Type":"ContainerDied","Data":"a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f"} Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.682750 4870 scope.go:117] "RemoveContainer" containerID="7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.682763 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1f06e16e-0a03-4b6d-981f-62fe7421a78a","Type":"ContainerDied","Data":"335fb2a0331eac32f0495709614302512788294c482e79374b64b534c23cabcb"} Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.717010 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1f06e16e-0a03-4b6d-981f-62fe7421a78a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.717166 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f06e16e-0a03-4b6d-981f-62fe7421a78a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.717317 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8jjr\" (UniqueName: \"kubernetes.io/projected/1f06e16e-0a03-4b6d-981f-62fe7421a78a-kube-api-access-v8jjr\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.786036 4870 scope.go:117] "RemoveContainer" containerID="a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.793797 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.806323 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.825379 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:23:03 crc kubenswrapper[4870]: E0216 17:23:03.829413 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f06e16e-0a03-4b6d-981f-62fe7421a78a" containerName="nova-metadata-log" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.829442 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f06e16e-0a03-4b6d-981f-62fe7421a78a" containerName="nova-metadata-log" Feb 16 17:23:03 crc kubenswrapper[4870]: E0216 17:23:03.829459 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f06e16e-0a03-4b6d-981f-62fe7421a78a" containerName="nova-metadata-metadata" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.829465 4870 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="1f06e16e-0a03-4b6d-981f-62fe7421a78a" containerName="nova-metadata-metadata" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.829836 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f06e16e-0a03-4b6d-981f-62fe7421a78a" containerName="nova-metadata-log" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.829863 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f06e16e-0a03-4b6d-981f-62fe7421a78a" containerName="nova-metadata-metadata" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.831655 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.836635 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.836875 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.837120 4870 scope.go:117] "RemoveContainer" containerID="7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967" Feb 16 17:23:03 crc kubenswrapper[4870]: E0216 17:23:03.840110 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967\": container with ID starting with 7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967 not found: ID does not exist" containerID="7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.840149 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967"} err="failed to get container status 
\"7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967\": rpc error: code = NotFound desc = could not find container \"7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967\": container with ID starting with 7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967 not found: ID does not exist" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.840177 4870 scope.go:117] "RemoveContainer" containerID="a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f" Feb 16 17:23:03 crc kubenswrapper[4870]: E0216 17:23:03.842609 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f\": container with ID starting with a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f not found: ID does not exist" containerID="a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.842662 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f"} err="failed to get container status \"a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f\": rpc error: code = NotFound desc = could not find container \"a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f\": container with ID starting with a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f not found: ID does not exist" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.842693 4870 scope.go:117] "RemoveContainer" containerID="7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.844362 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967"} err="failed to get 
container status \"7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967\": rpc error: code = NotFound desc = could not find container \"7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967\": container with ID starting with 7a96596e6bc875f118bc9cbd564f2fc12c9ae632ffac3cca5c79749cbe24f967 not found: ID does not exist" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.844391 4870 scope.go:117] "RemoveContainer" containerID="a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.848718 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f"} err="failed to get container status \"a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f\": rpc error: code = NotFound desc = could not find container \"a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f\": container with ID starting with a3165330642efdaf8740e8bc5967583a743ddbf7679a19f7ff583498b568fe1f not found: ID does not exist" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.854720 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.923038 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4a26481-2689-4f98-8254-5d6be5f17130-logs\") pod \"nova-metadata-0\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") " pod="openstack/nova-metadata-0" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.923249 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkm62\" (UniqueName: \"kubernetes.io/projected/e4a26481-2689-4f98-8254-5d6be5f17130-kube-api-access-vkm62\") pod \"nova-metadata-0\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") " 
pod="openstack/nova-metadata-0" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.923596 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-config-data\") pod \"nova-metadata-0\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") " pod="openstack/nova-metadata-0" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.923674 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") " pod="openstack/nova-metadata-0" Feb 16 17:23:03 crc kubenswrapper[4870]: I0216 17:23:03.923774 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") " pod="openstack/nova-metadata-0" Feb 16 17:23:04 crc kubenswrapper[4870]: I0216 17:23:04.026106 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4a26481-2689-4f98-8254-5d6be5f17130-logs\") pod \"nova-metadata-0\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") " pod="openstack/nova-metadata-0" Feb 16 17:23:04 crc kubenswrapper[4870]: I0216 17:23:04.026212 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkm62\" (UniqueName: \"kubernetes.io/projected/e4a26481-2689-4f98-8254-5d6be5f17130-kube-api-access-vkm62\") pod \"nova-metadata-0\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") " pod="openstack/nova-metadata-0" Feb 16 17:23:04 crc kubenswrapper[4870]: I0216 17:23:04.026328 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-config-data\") pod \"nova-metadata-0\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") " pod="openstack/nova-metadata-0" Feb 16 17:23:04 crc kubenswrapper[4870]: I0216 17:23:04.026359 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") " pod="openstack/nova-metadata-0" Feb 16 17:23:04 crc kubenswrapper[4870]: I0216 17:23:04.026397 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") " pod="openstack/nova-metadata-0" Feb 16 17:23:04 crc kubenswrapper[4870]: I0216 17:23:04.026548 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4a26481-2689-4f98-8254-5d6be5f17130-logs\") pod \"nova-metadata-0\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") " pod="openstack/nova-metadata-0" Feb 16 17:23:04 crc kubenswrapper[4870]: I0216 17:23:04.030699 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") " pod="openstack/nova-metadata-0" Feb 16 17:23:04 crc kubenswrapper[4870]: I0216 17:23:04.030764 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-config-data\") pod \"nova-metadata-0\" (UID: 
\"e4a26481-2689-4f98-8254-5d6be5f17130\") " pod="openstack/nova-metadata-0" Feb 16 17:23:04 crc kubenswrapper[4870]: I0216 17:23:04.033753 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") " pod="openstack/nova-metadata-0" Feb 16 17:23:04 crc kubenswrapper[4870]: I0216 17:23:04.044162 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkm62\" (UniqueName: \"kubernetes.io/projected/e4a26481-2689-4f98-8254-5d6be5f17130-kube-api-access-vkm62\") pod \"nova-metadata-0\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") " pod="openstack/nova-metadata-0" Feb 16 17:23:04 crc kubenswrapper[4870]: I0216 17:23:04.149215 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:23:04 crc kubenswrapper[4870]: I0216 17:23:04.236678 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f06e16e-0a03-4b6d-981f-62fe7421a78a" path="/var/lib/kubelet/pods/1f06e16e-0a03-4b6d-981f-62fe7421a78a/volumes" Feb 16 17:23:04 crc kubenswrapper[4870]: I0216 17:23:04.637990 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:23:04 crc kubenswrapper[4870]: I0216 17:23:04.694652 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4a26481-2689-4f98-8254-5d6be5f17130","Type":"ContainerStarted","Data":"2f46b727a2ec1808404bac9338d8965357b737cff9d445556f8f83d19a6325de"} Feb 16 17:23:05 crc kubenswrapper[4870]: I0216 17:23:05.742154 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4a26481-2689-4f98-8254-5d6be5f17130","Type":"ContainerStarted","Data":"6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9"} Feb 16 
17:23:05 crc kubenswrapper[4870]: I0216 17:23:05.742506 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4a26481-2689-4f98-8254-5d6be5f17130","Type":"ContainerStarted","Data":"ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a"} Feb 16 17:23:05 crc kubenswrapper[4870]: I0216 17:23:05.786734 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.786707359 podStartE2EDuration="2.786707359s" podCreationTimestamp="2026-02-16 17:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:05.767383888 +0000 UTC m=+1390.250848282" watchObservedRunningTime="2026-02-16 17:23:05.786707359 +0000 UTC m=+1390.270171743" Feb 16 17:23:05 crc kubenswrapper[4870]: I0216 17:23:05.884551 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:23:05 crc kubenswrapper[4870]: I0216 17:23:05.884592 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:23:05 crc kubenswrapper[4870]: I0216 17:23:05.917254 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:06 crc kubenswrapper[4870]: I0216 17:23:06.113798 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 17:23:06 crc kubenswrapper[4870]: I0216 17:23:06.113855 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 17:23:06 crc kubenswrapper[4870]: I0216 17:23:06.179651 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 17:23:06 crc kubenswrapper[4870]: I0216 17:23:06.310345 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:23:06 crc kubenswrapper[4870]: I0216 17:23:06.430585 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6hrp5"] Feb 16 17:23:06 crc kubenswrapper[4870]: I0216 17:23:06.431145 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" podUID="3d29aa8d-0873-48ed-8f06-665b855a6037" containerName="dnsmasq-dns" containerID="cri-o://f8638965369202d74398f6a4841bc97d6a6050afeabe91a1292bf05e9b7ff318" gracePeriod=10 Feb 16 17:23:06 crc kubenswrapper[4870]: I0216 17:23:06.761658 4870 generic.go:334] "Generic (PLEG): container finished" podID="2940d957-d580-4cea-8476-ace5524d8af3" containerID="4d918c8df808253acefa7999840f6875d7fe2262246e00a6fc68afeb3760040d" exitCode=0 Feb 16 17:23:06 crc kubenswrapper[4870]: I0216 17:23:06.761741 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7n7cd" event={"ID":"2940d957-d580-4cea-8476-ace5524d8af3","Type":"ContainerDied","Data":"4d918c8df808253acefa7999840f6875d7fe2262246e00a6fc68afeb3760040d"} Feb 16 17:23:06 crc kubenswrapper[4870]: I0216 17:23:06.767912 4870 generic.go:334] "Generic (PLEG): container finished" podID="3d29aa8d-0873-48ed-8f06-665b855a6037" containerID="f8638965369202d74398f6a4841bc97d6a6050afeabe91a1292bf05e9b7ff318" exitCode=0 Feb 16 17:23:06 crc kubenswrapper[4870]: I0216 17:23:06.769387 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" event={"ID":"3d29aa8d-0873-48ed-8f06-665b855a6037","Type":"ContainerDied","Data":"f8638965369202d74398f6a4841bc97d6a6050afeabe91a1292bf05e9b7ff318"} Feb 16 17:23:06 crc kubenswrapper[4870]: I0216 17:23:06.824507 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 17:23:06 crc kubenswrapper[4870]: I0216 17:23:06.974292 4870 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="de5b219a-9004-4f4d-8a8a-cd03eae15d3d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.210:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 17:23:06 crc kubenswrapper[4870]: I0216 17:23:06.974397 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="de5b219a-9004-4f4d-8a8a-cd03eae15d3d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.210:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.195159 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.317248 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-ovsdbserver-sb\") pod \"3d29aa8d-0873-48ed-8f06-665b855a6037\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.317377 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-ovsdbserver-nb\") pod \"3d29aa8d-0873-48ed-8f06-665b855a6037\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.317470 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-config\") pod \"3d29aa8d-0873-48ed-8f06-665b855a6037\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.317623 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-dns-swift-storage-0\") pod \"3d29aa8d-0873-48ed-8f06-665b855a6037\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.317764 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7v7c\" (UniqueName: \"kubernetes.io/projected/3d29aa8d-0873-48ed-8f06-665b855a6037-kube-api-access-w7v7c\") pod \"3d29aa8d-0873-48ed-8f06-665b855a6037\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.317876 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-dns-svc\") pod \"3d29aa8d-0873-48ed-8f06-665b855a6037\" (UID: \"3d29aa8d-0873-48ed-8f06-665b855a6037\") " Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.325980 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d29aa8d-0873-48ed-8f06-665b855a6037-kube-api-access-w7v7c" (OuterVolumeSpecName: "kube-api-access-w7v7c") pod "3d29aa8d-0873-48ed-8f06-665b855a6037" (UID: "3d29aa8d-0873-48ed-8f06-665b855a6037"). InnerVolumeSpecName "kube-api-access-w7v7c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.421509 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7v7c\" (UniqueName: \"kubernetes.io/projected/3d29aa8d-0873-48ed-8f06-665b855a6037-kube-api-access-w7v7c\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.424304 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3d29aa8d-0873-48ed-8f06-665b855a6037" (UID: "3d29aa8d-0873-48ed-8f06-665b855a6037"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.437938 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3d29aa8d-0873-48ed-8f06-665b855a6037" (UID: "3d29aa8d-0873-48ed-8f06-665b855a6037"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.446918 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3d29aa8d-0873-48ed-8f06-665b855a6037" (UID: "3d29aa8d-0873-48ed-8f06-665b855a6037"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.450155 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3d29aa8d-0873-48ed-8f06-665b855a6037" (UID: "3d29aa8d-0873-48ed-8f06-665b855a6037"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.501797 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-config" (OuterVolumeSpecName: "config") pod "3d29aa8d-0873-48ed-8f06-665b855a6037" (UID: "3d29aa8d-0873-48ed-8f06-665b855a6037"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.524815 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.524889 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.524904 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.524915 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.524926 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d29aa8d-0873-48ed-8f06-665b855a6037-config\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.783599 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5"
Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.783666 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6hrp5" event={"ID":"3d29aa8d-0873-48ed-8f06-665b855a6037","Type":"ContainerDied","Data":"d422f9aaefb9701f351ff281fc01eee9e7c559bc275f4c75944ad959c2221586"}
Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.783792 4870 scope.go:117] "RemoveContainer" containerID="f8638965369202d74398f6a4841bc97d6a6050afeabe91a1292bf05e9b7ff318"
Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.792478 4870 generic.go:334] "Generic (PLEG): container finished" podID="d6519f0c-2a4f-4712-b3f3-92effbbcec1d" containerID="78a154cfc26923be8ac8821860922e406c1b50690f786064b07517d494f302b9" exitCode=0
Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.792786 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fhbsn" event={"ID":"d6519f0c-2a4f-4712-b3f3-92effbbcec1d","Type":"ContainerDied","Data":"78a154cfc26923be8ac8821860922e406c1b50690f786064b07517d494f302b9"}
Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.848927 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6hrp5"]
Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.851649 4870 scope.go:117] "RemoveContainer" containerID="ffc0513f7e308113478d014509a21f823897f92b8a18ae61519188507558b19f"
Feb 16 17:23:07 crc kubenswrapper[4870]: I0216 17:23:07.861278 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6hrp5"]
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.238333 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d29aa8d-0873-48ed-8f06-665b855a6037" path="/var/lib/kubelet/pods/3d29aa8d-0873-48ed-8f06-665b855a6037/volumes"
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.264189 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.264452 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="086322f7-5554-4a10-a1be-10622174e27f" containerName="kube-state-metrics" containerID="cri-o://e33da650737dea9303ca3d6a36810621f15b36aa7413a146a05365d2973730fc" gracePeriod=30
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.577599 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7n7cd"
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.654269 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r77dh\" (UniqueName: \"kubernetes.io/projected/2940d957-d580-4cea-8476-ace5524d8af3-kube-api-access-r77dh\") pod \"2940d957-d580-4cea-8476-ace5524d8af3\" (UID: \"2940d957-d580-4cea-8476-ace5524d8af3\") "
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.655441 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-combined-ca-bundle\") pod \"2940d957-d580-4cea-8476-ace5524d8af3\" (UID: \"2940d957-d580-4cea-8476-ace5524d8af3\") "
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.655495 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-scripts\") pod \"2940d957-d580-4cea-8476-ace5524d8af3\" (UID: \"2940d957-d580-4cea-8476-ace5524d8af3\") "
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.655565 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-config-data\") pod \"2940d957-d580-4cea-8476-ace5524d8af3\" (UID: \"2940d957-d580-4cea-8476-ace5524d8af3\") "
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.661741 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2940d957-d580-4cea-8476-ace5524d8af3-kube-api-access-r77dh" (OuterVolumeSpecName: "kube-api-access-r77dh") pod "2940d957-d580-4cea-8476-ace5524d8af3" (UID: "2940d957-d580-4cea-8476-ace5524d8af3"). InnerVolumeSpecName "kube-api-access-r77dh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.722421 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2940d957-d580-4cea-8476-ace5524d8af3" (UID: "2940d957-d580-4cea-8476-ace5524d8af3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.738787 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-scripts" (OuterVolumeSpecName: "scripts") pod "2940d957-d580-4cea-8476-ace5524d8af3" (UID: "2940d957-d580-4cea-8476-ace5524d8af3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.740993 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-config-data" (OuterVolumeSpecName: "config-data") pod "2940d957-d580-4cea-8476-ace5524d8af3" (UID: "2940d957-d580-4cea-8476-ace5524d8af3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.760828 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.760890 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.760903 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2940d957-d580-4cea-8476-ace5524d8af3-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.760912 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r77dh\" (UniqueName: \"kubernetes.io/projected/2940d957-d580-4cea-8476-ace5524d8af3-kube-api-access-r77dh\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.823239 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7n7cd" event={"ID":"2940d957-d580-4cea-8476-ace5524d8af3","Type":"ContainerDied","Data":"5a3c4038928d36c579202247eeabba0811674920ebd3e49a91c8d699419b8a31"}
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.823281 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a3c4038928d36c579202247eeabba0811674920ebd3e49a91c8d699419b8a31"
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.823357 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7n7cd"
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.829058 4870 generic.go:334] "Generic (PLEG): container finished" podID="086322f7-5554-4a10-a1be-10622174e27f" containerID="e33da650737dea9303ca3d6a36810621f15b36aa7413a146a05365d2973730fc" exitCode=2
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.829140 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"086322f7-5554-4a10-a1be-10622174e27f","Type":"ContainerDied","Data":"e33da650737dea9303ca3d6a36810621f15b36aa7413a146a05365d2973730fc"}
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.973324 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.973576 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="de5b219a-9004-4f4d-8a8a-cd03eae15d3d" containerName="nova-api-log" containerID="cri-o://901bf00a165d78e88a173a053b72e7523b03f077d4a65a667f60f559566791df" gracePeriod=30
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.973746 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="de5b219a-9004-4f4d-8a8a-cd03eae15d3d" containerName="nova-api-api" containerID="cri-o://6b755386ab3f66d3745906b3e2817450762a6f5ecc08c159db945d6a27324f74" gracePeriod=30
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.980107 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.984523 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 17:23:08 crc kubenswrapper[4870]: I0216 17:23:08.984742 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9fe113d9-0c75-4ccf-8615-64d80312db3b" containerName="nova-scheduler-scheduler" containerID="cri-o://4797d9051761e832b095227d545456cee6d2ee1335227a3fea177ab59e9a0813" gracePeriod=30
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.036331 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.037477 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e4a26481-2689-4f98-8254-5d6be5f17130" containerName="nova-metadata-log" containerID="cri-o://ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a" gracePeriod=30
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.037818 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e4a26481-2689-4f98-8254-5d6be5f17130" containerName="nova-metadata-metadata" containerID="cri-o://6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9" gracePeriod=30
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.067200 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkxfq\" (UniqueName: \"kubernetes.io/projected/086322f7-5554-4a10-a1be-10622174e27f-kube-api-access-gkxfq\") pod \"086322f7-5554-4a10-a1be-10622174e27f\" (UID: \"086322f7-5554-4a10-a1be-10622174e27f\") "
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.070927 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/086322f7-5554-4a10-a1be-10622174e27f-kube-api-access-gkxfq" (OuterVolumeSpecName: "kube-api-access-gkxfq") pod "086322f7-5554-4a10-a1be-10622174e27f" (UID: "086322f7-5554-4a10-a1be-10622174e27f"). InnerVolumeSpecName "kube-api-access-gkxfq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.149914 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.149977 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.176493 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkxfq\" (UniqueName: \"kubernetes.io/projected/086322f7-5554-4a10-a1be-10622174e27f-kube-api-access-gkxfq\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.257108 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fhbsn"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.379906 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-config-data\") pod \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\" (UID: \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\") "
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.380097 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvjxr\" (UniqueName: \"kubernetes.io/projected/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-kube-api-access-lvjxr\") pod \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\" (UID: \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\") "
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.380139 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-scripts\") pod \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\" (UID: \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\") "
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.380178 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-combined-ca-bundle\") pod \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\" (UID: \"d6519f0c-2a4f-4712-b3f3-92effbbcec1d\") "
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.384179 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-kube-api-access-lvjxr" (OuterVolumeSpecName: "kube-api-access-lvjxr") pod "d6519f0c-2a4f-4712-b3f3-92effbbcec1d" (UID: "d6519f0c-2a4f-4712-b3f3-92effbbcec1d"). InnerVolumeSpecName "kube-api-access-lvjxr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.387906 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-scripts" (OuterVolumeSpecName: "scripts") pod "d6519f0c-2a4f-4712-b3f3-92effbbcec1d" (UID: "d6519f0c-2a4f-4712-b3f3-92effbbcec1d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.423834 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6519f0c-2a4f-4712-b3f3-92effbbcec1d" (UID: "d6519f0c-2a4f-4712-b3f3-92effbbcec1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.442727 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-config-data" (OuterVolumeSpecName: "config-data") pod "d6519f0c-2a4f-4712-b3f3-92effbbcec1d" (UID: "d6519f0c-2a4f-4712-b3f3-92effbbcec1d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.482628 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.482656 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.482666 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.482674 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvjxr\" (UniqueName: \"kubernetes.io/projected/d6519f0c-2a4f-4712-b3f3-92effbbcec1d-kube-api-access-lvjxr\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.657131 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.688557 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4a26481-2689-4f98-8254-5d6be5f17130-logs\") pod \"e4a26481-2689-4f98-8254-5d6be5f17130\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") "
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.688719 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-config-data\") pod \"e4a26481-2689-4f98-8254-5d6be5f17130\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") "
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.689086 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4a26481-2689-4f98-8254-5d6be5f17130-logs" (OuterVolumeSpecName: "logs") pod "e4a26481-2689-4f98-8254-5d6be5f17130" (UID: "e4a26481-2689-4f98-8254-5d6be5f17130"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.688760 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-nova-metadata-tls-certs\") pod \"e4a26481-2689-4f98-8254-5d6be5f17130\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") "
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.689375 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkm62\" (UniqueName: \"kubernetes.io/projected/e4a26481-2689-4f98-8254-5d6be5f17130-kube-api-access-vkm62\") pod \"e4a26481-2689-4f98-8254-5d6be5f17130\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") "
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.689396 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-combined-ca-bundle\") pod \"e4a26481-2689-4f98-8254-5d6be5f17130\" (UID: \"e4a26481-2689-4f98-8254-5d6be5f17130\") "
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.689986 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4a26481-2689-4f98-8254-5d6be5f17130-logs\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.693587 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a26481-2689-4f98-8254-5d6be5f17130-kube-api-access-vkm62" (OuterVolumeSpecName: "kube-api-access-vkm62") pod "e4a26481-2689-4f98-8254-5d6be5f17130" (UID: "e4a26481-2689-4f98-8254-5d6be5f17130"). InnerVolumeSpecName "kube-api-access-vkm62". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.720631 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4a26481-2689-4f98-8254-5d6be5f17130" (UID: "e4a26481-2689-4f98-8254-5d6be5f17130"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.739680 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-config-data" (OuterVolumeSpecName: "config-data") pod "e4a26481-2689-4f98-8254-5d6be5f17130" (UID: "e4a26481-2689-4f98-8254-5d6be5f17130"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.760097 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e4a26481-2689-4f98-8254-5d6be5f17130" (UID: "e4a26481-2689-4f98-8254-5d6be5f17130"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.792367 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.792428 4870 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.792442 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkm62\" (UniqueName: \"kubernetes.io/projected/e4a26481-2689-4f98-8254-5d6be5f17130-kube-api-access-vkm62\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.792460 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a26481-2689-4f98-8254-5d6be5f17130-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.884043 4870 generic.go:334] "Generic (PLEG): container finished" podID="9fe113d9-0c75-4ccf-8615-64d80312db3b" containerID="4797d9051761e832b095227d545456cee6d2ee1335227a3fea177ab59e9a0813" exitCode=0
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.884219 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9fe113d9-0c75-4ccf-8615-64d80312db3b","Type":"ContainerDied","Data":"4797d9051761e832b095227d545456cee6d2ee1335227a3fea177ab59e9a0813"}
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.898761 4870 generic.go:334] "Generic (PLEG): container finished" podID="e4a26481-2689-4f98-8254-5d6be5f17130" containerID="6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9" exitCode=0
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.898809 4870 generic.go:334] "Generic (PLEG): container finished" podID="e4a26481-2689-4f98-8254-5d6be5f17130" containerID="ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a" exitCode=143
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.898900 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4a26481-2689-4f98-8254-5d6be5f17130","Type":"ContainerDied","Data":"6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9"}
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.899064 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4a26481-2689-4f98-8254-5d6be5f17130","Type":"ContainerDied","Data":"ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a"}
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.899085 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4a26481-2689-4f98-8254-5d6be5f17130","Type":"ContainerDied","Data":"2f46b727a2ec1808404bac9338d8965357b737cff9d445556f8f83d19a6325de"}
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.899105 4870 scope.go:117] "RemoveContainer" containerID="6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.899297 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.912065 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"086322f7-5554-4a10-a1be-10622174e27f","Type":"ContainerDied","Data":"c6c9c80718dba9c43aab8c20e2da28eb608125c186d5b1e036d2df832d18c2ab"}
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.912496 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.912983 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 17:23:09 crc kubenswrapper[4870]: E0216 17:23:09.913040 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6519f0c-2a4f-4712-b3f3-92effbbcec1d" containerName="nova-cell1-conductor-db-sync"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.913058 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6519f0c-2a4f-4712-b3f3-92effbbcec1d" containerName="nova-cell1-conductor-db-sync"
Feb 16 17:23:09 crc kubenswrapper[4870]: E0216 17:23:09.913082 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d29aa8d-0873-48ed-8f06-665b855a6037" containerName="init"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.913091 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d29aa8d-0873-48ed-8f06-665b855a6037" containerName="init"
Feb 16 17:23:09 crc kubenswrapper[4870]: E0216 17:23:09.913116 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a26481-2689-4f98-8254-5d6be5f17130" containerName="nova-metadata-log"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.913125 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a26481-2689-4f98-8254-5d6be5f17130" containerName="nova-metadata-log"
Feb 16 17:23:09 crc kubenswrapper[4870]: E0216 17:23:09.913144 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d29aa8d-0873-48ed-8f06-665b855a6037" containerName="dnsmasq-dns"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.913151 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d29aa8d-0873-48ed-8f06-665b855a6037" containerName="dnsmasq-dns"
Feb 16 17:23:09 crc kubenswrapper[4870]: E0216 17:23:09.913170 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2940d957-d580-4cea-8476-ace5524d8af3" containerName="nova-manage"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.913177 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="2940d957-d580-4cea-8476-ace5524d8af3" containerName="nova-manage"
Feb 16 17:23:09 crc kubenswrapper[4870]: E0216 17:23:09.913192 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086322f7-5554-4a10-a1be-10622174e27f" containerName="kube-state-metrics"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.913199 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="086322f7-5554-4a10-a1be-10622174e27f" containerName="kube-state-metrics"
Feb 16 17:23:09 crc kubenswrapper[4870]: E0216 17:23:09.913215 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a26481-2689-4f98-8254-5d6be5f17130" containerName="nova-metadata-metadata"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.913223 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a26481-2689-4f98-8254-5d6be5f17130" containerName="nova-metadata-metadata"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.913445 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="086322f7-5554-4a10-a1be-10622174e27f" containerName="kube-state-metrics"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.913467 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d29aa8d-0873-48ed-8f06-665b855a6037" containerName="dnsmasq-dns"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.913490 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a26481-2689-4f98-8254-5d6be5f17130" containerName="nova-metadata-metadata"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.913502 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6519f0c-2a4f-4712-b3f3-92effbbcec1d" containerName="nova-cell1-conductor-db-sync"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.913516 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a26481-2689-4f98-8254-5d6be5f17130" containerName="nova-metadata-log"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.913527 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="2940d957-d580-4cea-8476-ace5524d8af3" containerName="nova-manage"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.914514 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.930866 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.943431 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fhbsn"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.943462 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fhbsn" event={"ID":"d6519f0c-2a4f-4712-b3f3-92effbbcec1d","Type":"ContainerDied","Data":"c9b841db59a27829b61aab0897543f022d4683941fb3612d0015f1d2a7ca0b6e"}
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.943557 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9b841db59a27829b61aab0897543f022d4683941fb3612d0015f1d2a7ca0b6e"
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.959517 4870 generic.go:334] "Generic (PLEG): container finished" podID="de5b219a-9004-4f4d-8a8a-cd03eae15d3d" containerID="901bf00a165d78e88a173a053b72e7523b03f077d4a65a667f60f559566791df" exitCode=143
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.959579 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de5b219a-9004-4f4d-8a8a-cd03eae15d3d","Type":"ContainerDied","Data":"901bf00a165d78e88a173a053b72e7523b03f077d4a65a667f60f559566791df"}
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.993624 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 17:23:09 crc kubenswrapper[4870]: I0216 17:23:09.994711 4870 scope.go:117] "RemoveContainer" containerID="ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a"
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.003045 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d31404-683b-4ac8-9d85-7b5425843395-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"11d31404-683b-4ac8-9d85-7b5425843395\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.003360 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pqjx\" (UniqueName: \"kubernetes.io/projected/11d31404-683b-4ac8-9d85-7b5425843395-kube-api-access-7pqjx\") pod \"nova-cell1-conductor-0\" (UID: \"11d31404-683b-4ac8-9d85-7b5425843395\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.003390 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d31404-683b-4ac8-9d85-7b5425843395-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"11d31404-683b-4ac8-9d85-7b5425843395\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.008847 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.025432 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.047741 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.071586 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.073281 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.080388 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.081042 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.108200 4870 scope.go:117] "RemoveContainer" containerID="6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9"
Feb 16 17:23:10 crc kubenswrapper[4870]: E0216 17:23:10.108908 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9\": container with ID starting with 6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9 not found: ID does not exist" containerID="6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9"
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.108984 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9"} err="failed to get container status \"6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9\": rpc error: code = NotFound desc = could not find container \"6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9\": container with ID starting with 6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9 not found: ID does not exist"
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.109010 4870 scope.go:117] "RemoveContainer" containerID="ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a"
Feb 16 17:23:10 crc kubenswrapper[4870]: E0216 17:23:10.109361 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a\": container with ID starting with ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a not found: ID does not exist" containerID="ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a"
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.109411 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a"} err="failed to get container status \"ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a\": rpc error: code = NotFound desc = could not find container \"ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a\": container with ID starting with ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a not found: ID does not exist"
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.109441 4870 scope.go:117] "RemoveContainer" containerID="6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9"
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.109832 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9"} err="failed to get container status \"6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9\": rpc error: code = NotFound desc = could not find container \"6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9\": container with ID starting with 6f55430c41e8ead7753507a2512b5b48834fc279113087829e2879e0568d63e9 not found: ID does not exist"
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.109877 4870 scope.go:117] "RemoveContainer" containerID="ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a"
Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.111002 4870 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a"} err="failed to get container status \"ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a\": rpc error: code = NotFound desc = could not find container \"ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a\": container with ID starting with ddb65b25b83ecbfc8ca33223e1ea38d2e04c9df0e4f0bd5115dccc17e5c39e5a not found: ID does not exist" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.111058 4870 scope.go:117] "RemoveContainer" containerID="e33da650737dea9303ca3d6a36810621f15b36aa7413a146a05365d2973730fc" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.115196 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-config-data\") pod \"nova-metadata-0\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " pod="openstack/nova-metadata-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.115387 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " pod="openstack/nova-metadata-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.115517 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzkrt\" (UniqueName: \"kubernetes.io/projected/56ec8d81-47e5-4aa5-b28d-cc3c69114886-kube-api-access-bzkrt\") pod \"nova-metadata-0\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " pod="openstack/nova-metadata-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.115748 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " pod="openstack/nova-metadata-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.116085 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d31404-683b-4ac8-9d85-7b5425843395-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"11d31404-683b-4ac8-9d85-7b5425843395\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.116333 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ec8d81-47e5-4aa5-b28d-cc3c69114886-logs\") pod \"nova-metadata-0\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " pod="openstack/nova-metadata-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.116580 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pqjx\" (UniqueName: \"kubernetes.io/projected/11d31404-683b-4ac8-9d85-7b5425843395-kube-api-access-7pqjx\") pod \"nova-cell1-conductor-0\" (UID: \"11d31404-683b-4ac8-9d85-7b5425843395\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.116700 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d31404-683b-4ac8-9d85-7b5425843395-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"11d31404-683b-4ac8-9d85-7b5425843395\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.142473 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pqjx\" (UniqueName: \"kubernetes.io/projected/11d31404-683b-4ac8-9d85-7b5425843395-kube-api-access-7pqjx\") pod \"nova-cell1-conductor-0\" 
(UID: \"11d31404-683b-4ac8-9d85-7b5425843395\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.149530 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d31404-683b-4ac8-9d85-7b5425843395-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"11d31404-683b-4ac8-9d85-7b5425843395\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.154013 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.158276 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d31404-683b-4ac8-9d85-7b5425843395-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"11d31404-683b-4ac8-9d85-7b5425843395\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.176022 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.178865 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.182684 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.182917 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.189842 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.220993 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-config-data\") pod \"nova-metadata-0\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " pod="openstack/nova-metadata-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.221058 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " pod="openstack/nova-metadata-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.221085 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzkrt\" (UniqueName: \"kubernetes.io/projected/56ec8d81-47e5-4aa5-b28d-cc3c69114886-kube-api-access-bzkrt\") pod \"nova-metadata-0\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " pod="openstack/nova-metadata-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.221132 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6df8d5ad-619e-4953-9e10-ac1c43c20e3e-kube-state-metrics-tls-certs\") pod 
\"kube-state-metrics-0\" (UID: \"6df8d5ad-619e-4953-9e10-ac1c43c20e3e\") " pod="openstack/kube-state-metrics-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.221163 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m4bt\" (UniqueName: \"kubernetes.io/projected/6df8d5ad-619e-4953-9e10-ac1c43c20e3e-kube-api-access-4m4bt\") pod \"kube-state-metrics-0\" (UID: \"6df8d5ad-619e-4953-9e10-ac1c43c20e3e\") " pod="openstack/kube-state-metrics-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.221187 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df8d5ad-619e-4953-9e10-ac1c43c20e3e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6df8d5ad-619e-4953-9e10-ac1c43c20e3e\") " pod="openstack/kube-state-metrics-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.221208 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " pod="openstack/nova-metadata-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.221311 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ec8d81-47e5-4aa5-b28d-cc3c69114886-logs\") pod \"nova-metadata-0\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " pod="openstack/nova-metadata-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.221347 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6df8d5ad-619e-4953-9e10-ac1c43c20e3e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: 
\"6df8d5ad-619e-4953-9e10-ac1c43c20e3e\") " pod="openstack/kube-state-metrics-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.232800 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ec8d81-47e5-4aa5-b28d-cc3c69114886-logs\") pod \"nova-metadata-0\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " pod="openstack/nova-metadata-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.243782 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-config-data\") pod \"nova-metadata-0\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " pod="openstack/nova-metadata-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.246510 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " pod="openstack/nova-metadata-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.261746 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzkrt\" (UniqueName: \"kubernetes.io/projected/56ec8d81-47e5-4aa5-b28d-cc3c69114886-kube-api-access-bzkrt\") pod \"nova-metadata-0\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " pod="openstack/nova-metadata-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.270335 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " pod="openstack/nova-metadata-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.305295 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.322220 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="086322f7-5554-4a10-a1be-10622174e27f" path="/var/lib/kubelet/pods/086322f7-5554-4a10-a1be-10622174e27f/volumes" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.322916 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4a26481-2689-4f98-8254-5d6be5f17130" path="/var/lib/kubelet/pods/e4a26481-2689-4f98-8254-5d6be5f17130/volumes" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.327661 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6df8d5ad-619e-4953-9e10-ac1c43c20e3e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6df8d5ad-619e-4953-9e10-ac1c43c20e3e\") " pod="openstack/kube-state-metrics-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.327713 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m4bt\" (UniqueName: \"kubernetes.io/projected/6df8d5ad-619e-4953-9e10-ac1c43c20e3e-kube-api-access-4m4bt\") pod \"kube-state-metrics-0\" (UID: \"6df8d5ad-619e-4953-9e10-ac1c43c20e3e\") " pod="openstack/kube-state-metrics-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.327751 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df8d5ad-619e-4953-9e10-ac1c43c20e3e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6df8d5ad-619e-4953-9e10-ac1c43c20e3e\") " pod="openstack/kube-state-metrics-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.327930 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/6df8d5ad-619e-4953-9e10-ac1c43c20e3e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6df8d5ad-619e-4953-9e10-ac1c43c20e3e\") " pod="openstack/kube-state-metrics-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.342011 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df8d5ad-619e-4953-9e10-ac1c43c20e3e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6df8d5ad-619e-4953-9e10-ac1c43c20e3e\") " pod="openstack/kube-state-metrics-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.342562 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6df8d5ad-619e-4953-9e10-ac1c43c20e3e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6df8d5ad-619e-4953-9e10-ac1c43c20e3e\") " pod="openstack/kube-state-metrics-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.345121 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6df8d5ad-619e-4953-9e10-ac1c43c20e3e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6df8d5ad-619e-4953-9e10-ac1c43c20e3e\") " pod="openstack/kube-state-metrics-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.371925 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m4bt\" (UniqueName: \"kubernetes.io/projected/6df8d5ad-619e-4953-9e10-ac1c43c20e3e-kube-api-access-4m4bt\") pod \"kube-state-metrics-0\" (UID: \"6df8d5ad-619e-4953-9e10-ac1c43c20e3e\") " pod="openstack/kube-state-metrics-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.409547 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.504883 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.536730 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlfn6\" (UniqueName: \"kubernetes.io/projected/9fe113d9-0c75-4ccf-8615-64d80312db3b-kube-api-access-jlfn6\") pod \"9fe113d9-0c75-4ccf-8615-64d80312db3b\" (UID: \"9fe113d9-0c75-4ccf-8615-64d80312db3b\") " Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.536790 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe113d9-0c75-4ccf-8615-64d80312db3b-combined-ca-bundle\") pod \"9fe113d9-0c75-4ccf-8615-64d80312db3b\" (UID: \"9fe113d9-0c75-4ccf-8615-64d80312db3b\") " Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.536875 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe113d9-0c75-4ccf-8615-64d80312db3b-config-data\") pod \"9fe113d9-0c75-4ccf-8615-64d80312db3b\" (UID: \"9fe113d9-0c75-4ccf-8615-64d80312db3b\") " Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.564938 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fe113d9-0c75-4ccf-8615-64d80312db3b-kube-api-access-jlfn6" (OuterVolumeSpecName: "kube-api-access-jlfn6") pod "9fe113d9-0c75-4ccf-8615-64d80312db3b" (UID: "9fe113d9-0c75-4ccf-8615-64d80312db3b"). InnerVolumeSpecName "kube-api-access-jlfn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.609181 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.632533 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe113d9-0c75-4ccf-8615-64d80312db3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fe113d9-0c75-4ccf-8615-64d80312db3b" (UID: "9fe113d9-0c75-4ccf-8615-64d80312db3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.675350 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe113d9-0c75-4ccf-8615-64d80312db3b-config-data" (OuterVolumeSpecName: "config-data") pod "9fe113d9-0c75-4ccf-8615-64d80312db3b" (UID: "9fe113d9-0c75-4ccf-8615-64d80312db3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.676463 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlfn6\" (UniqueName: \"kubernetes.io/projected/9fe113d9-0c75-4ccf-8615-64d80312db3b-kube-api-access-jlfn6\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.676507 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe113d9-0c75-4ccf-8615-64d80312db3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.676520 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe113d9-0c75-4ccf-8615-64d80312db3b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.970209 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"9fe113d9-0c75-4ccf-8615-64d80312db3b","Type":"ContainerDied","Data":"8a62f37dccd64995bc6612b9bd434d771f690ee7145f0f7d45c8cce869ec46e5"} Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.970260 4870 scope.go:117] "RemoveContainer" containerID="4797d9051761e832b095227d545456cee6d2ee1335227a3fea177ab59e9a0813" Feb 16 17:23:10 crc kubenswrapper[4870]: I0216 17:23:10.970267 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.014065 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.039819 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.051439 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:23:11 crc kubenswrapper[4870]: E0216 17:23:11.051872 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe113d9-0c75-4ccf-8615-64d80312db3b" containerName="nova-scheduler-scheduler" Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.051885 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fe113d9-0c75-4ccf-8615-64d80312db3b" containerName="nova-scheduler-scheduler" Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.052138 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fe113d9-0c75-4ccf-8615-64d80312db3b" containerName="nova-scheduler-scheduler" Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.052833 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.056299 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.060964 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.109401 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 17:23:11 crc kubenswrapper[4870]: W0216 17:23:11.117345 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11d31404_683b_4ac8_9d85_7b5425843395.slice/crio-fd39a9a47a41713229a632e0f217091d8d728d330dd662be287bacf6b0ffe025 WatchSource:0}: Error finding container fd39a9a47a41713229a632e0f217091d8d728d330dd662be287bacf6b0ffe025: Status 404 returned error can't find the container with id fd39a9a47a41713229a632e0f217091d8d728d330dd662be287bacf6b0ffe025 Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.188209 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a114e86f-d5ac-47dd-badc-7b42f764964e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a114e86f-d5ac-47dd-badc-7b42f764964e\") " pod="openstack/nova-scheduler-0" Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.188725 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a114e86f-d5ac-47dd-badc-7b42f764964e-config-data\") pod \"nova-scheduler-0\" (UID: \"a114e86f-d5ac-47dd-badc-7b42f764964e\") " pod="openstack/nova-scheduler-0" Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.188771 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-2hr4c\" (UniqueName: \"kubernetes.io/projected/a114e86f-d5ac-47dd-badc-7b42f764964e-kube-api-access-2hr4c\") pod \"nova-scheduler-0\" (UID: \"a114e86f-d5ac-47dd-badc-7b42f764964e\") " pod="openstack/nova-scheduler-0" Feb 16 17:23:11 crc kubenswrapper[4870]: E0216 17:23:11.226096 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.250987 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.291775 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a114e86f-d5ac-47dd-badc-7b42f764964e-config-data\") pod \"nova-scheduler-0\" (UID: \"a114e86f-d5ac-47dd-badc-7b42f764964e\") " pod="openstack/nova-scheduler-0" Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.291902 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hr4c\" (UniqueName: \"kubernetes.io/projected/a114e86f-d5ac-47dd-badc-7b42f764964e-kube-api-access-2hr4c\") pod \"nova-scheduler-0\" (UID: \"a114e86f-d5ac-47dd-badc-7b42f764964e\") " pod="openstack/nova-scheduler-0" Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.292013 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a114e86f-d5ac-47dd-badc-7b42f764964e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a114e86f-d5ac-47dd-badc-7b42f764964e\") " pod="openstack/nova-scheduler-0" Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.311878 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a114e86f-d5ac-47dd-badc-7b42f764964e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a114e86f-d5ac-47dd-badc-7b42f764964e\") " pod="openstack/nova-scheduler-0" Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.312870 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hr4c\" (UniqueName: \"kubernetes.io/projected/a114e86f-d5ac-47dd-badc-7b42f764964e-kube-api-access-2hr4c\") pod \"nova-scheduler-0\" (UID: \"a114e86f-d5ac-47dd-badc-7b42f764964e\") " pod="openstack/nova-scheduler-0" Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.317590 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a114e86f-d5ac-47dd-badc-7b42f764964e-config-data\") pod \"nova-scheduler-0\" (UID: \"a114e86f-d5ac-47dd-badc-7b42f764964e\") " pod="openstack/nova-scheduler-0" Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.340073 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.381641 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.649574 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.650072 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerName="ceilometer-central-agent" containerID="cri-o://72a5f4f9a0c505bf057c114cad802e1d386eda8f9f70036461a9a0d6c5909991" gracePeriod=30 Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.650508 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerName="proxy-httpd" containerID="cri-o://14a4f534b28c3f00c16fb1440ecacf6c9b61b2d59b425f655a4be3ac0224b02f" gracePeriod=30 Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.650558 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerName="sg-core" containerID="cri-o://320c7eaeb32e313405a02e9d1abd551d557608766920bf3fee6b2fcf35db7cce" gracePeriod=30 Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.650593 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerName="ceilometer-notification-agent" containerID="cri-o://89fb7221dbe78129bb53078f7616360cbaf9865d9ef2025c23a28bbe4963f09c" gracePeriod=30 Feb 16 17:23:11 crc kubenswrapper[4870]: I0216 17:23:11.856498 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:23:12 crc kubenswrapper[4870]: I0216 17:23:12.002590 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"11d31404-683b-4ac8-9d85-7b5425843395","Type":"ContainerStarted","Data":"4360f0997398febeb7dd01a54cab1bb3fdbcdb19eb3ec67ff55aeb5a073a11e1"} Feb 16 17:23:12 crc kubenswrapper[4870]: I0216 17:23:12.002980 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"11d31404-683b-4ac8-9d85-7b5425843395","Type":"ContainerStarted","Data":"fd39a9a47a41713229a632e0f217091d8d728d330dd662be287bacf6b0ffe025"} Feb 16 17:23:12 crc kubenswrapper[4870]: I0216 17:23:12.004264 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 16 17:23:12 crc kubenswrapper[4870]: I0216 17:23:12.012198 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"56ec8d81-47e5-4aa5-b28d-cc3c69114886","Type":"ContainerStarted","Data":"a7d2dc329d3c0674ae9606b4399bee2a0292c023b1f6b1bd0ad8244f8b403143"} Feb 16 17:23:12 crc kubenswrapper[4870]: I0216 17:23:12.012242 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"56ec8d81-47e5-4aa5-b28d-cc3c69114886","Type":"ContainerStarted","Data":"65b3fa2dc9943e461b89652edd2975c6700d29475ac4f4a365031a9dc0778ee2"} Feb 16 17:23:12 crc kubenswrapper[4870]: I0216 17:23:12.012254 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"56ec8d81-47e5-4aa5-b28d-cc3c69114886","Type":"ContainerStarted","Data":"c8470d0208279950208e94caf3a5c5c729c683d6c287cab9519e8736c32c14c5"} Feb 16 17:23:12 crc kubenswrapper[4870]: I0216 17:23:12.016982 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a114e86f-d5ac-47dd-badc-7b42f764964e","Type":"ContainerStarted","Data":"b744b61c17935a2f23d0f0b577f6f60fd1c255909698eebb784a72433ba62263"} Feb 16 17:23:12 crc kubenswrapper[4870]: I0216 17:23:12.023685 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"6df8d5ad-619e-4953-9e10-ac1c43c20e3e","Type":"ContainerStarted","Data":"75fa68d61cdd0cf5af40ee3f7f5ef8501b646aca2b25b6a186087699d56b60e0"} Feb 16 17:23:12 crc kubenswrapper[4870]: I0216 17:23:12.038485 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.03846368 podStartE2EDuration="3.03846368s" podCreationTimestamp="2026-02-16 17:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:12.036075562 +0000 UTC m=+1396.519539946" watchObservedRunningTime="2026-02-16 17:23:12.03846368 +0000 UTC m=+1396.521928054" Feb 16 17:23:12 crc kubenswrapper[4870]: I0216 17:23:12.041134 4870 generic.go:334] "Generic (PLEG): container finished" podID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerID="14a4f534b28c3f00c16fb1440ecacf6c9b61b2d59b425f655a4be3ac0224b02f" exitCode=0 Feb 16 17:23:12 crc kubenswrapper[4870]: I0216 17:23:12.041162 4870 generic.go:334] "Generic (PLEG): container finished" podID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerID="320c7eaeb32e313405a02e9d1abd551d557608766920bf3fee6b2fcf35db7cce" exitCode=2 Feb 16 17:23:12 crc kubenswrapper[4870]: I0216 17:23:12.041210 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ce694d3-c25b-4874-b4fe-ac2d3df6823c","Type":"ContainerDied","Data":"14a4f534b28c3f00c16fb1440ecacf6c9b61b2d59b425f655a4be3ac0224b02f"} Feb 16 17:23:12 crc kubenswrapper[4870]: I0216 17:23:12.041236 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ce694d3-c25b-4874-b4fe-ac2d3df6823c","Type":"ContainerDied","Data":"320c7eaeb32e313405a02e9d1abd551d557608766920bf3fee6b2fcf35db7cce"} Feb 16 17:23:12 crc kubenswrapper[4870]: I0216 17:23:12.062519 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" 
podStartSLOduration=3.062451704 podStartE2EDuration="3.062451704s" podCreationTimestamp="2026-02-16 17:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:12.05704834 +0000 UTC m=+1396.540512744" watchObservedRunningTime="2026-02-16 17:23:12.062451704 +0000 UTC m=+1396.545916078" Feb 16 17:23:12 crc kubenswrapper[4870]: I0216 17:23:12.262719 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fe113d9-0c75-4ccf-8615-64d80312db3b" path="/var/lib/kubelet/pods/9fe113d9-0c75-4ccf-8615-64d80312db3b/volumes" Feb 16 17:23:13 crc kubenswrapper[4870]: I0216 17:23:13.059296 4870 generic.go:334] "Generic (PLEG): container finished" podID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerID="72a5f4f9a0c505bf057c114cad802e1d386eda8f9f70036461a9a0d6c5909991" exitCode=0 Feb 16 17:23:13 crc kubenswrapper[4870]: I0216 17:23:13.059603 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ce694d3-c25b-4874-b4fe-ac2d3df6823c","Type":"ContainerDied","Data":"72a5f4f9a0c505bf057c114cad802e1d386eda8f9f70036461a9a0d6c5909991"} Feb 16 17:23:13 crc kubenswrapper[4870]: I0216 17:23:13.062605 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a114e86f-d5ac-47dd-badc-7b42f764964e","Type":"ContainerStarted","Data":"f668687b87833b2bd6f6e583733d8e7a103d8ad5c064b7577db1329579372f00"} Feb 16 17:23:13 crc kubenswrapper[4870]: I0216 17:23:13.067506 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6df8d5ad-619e-4953-9e10-ac1c43c20e3e","Type":"ContainerStarted","Data":"70495b23848158cb22d19ea0e59bcbffc864fd975defe40b7363772029b60929"} Feb 16 17:23:13 crc kubenswrapper[4870]: I0216 17:23:13.067548 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 17:23:13 crc 
kubenswrapper[4870]: I0216 17:23:13.090344 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.090319091 podStartE2EDuration="2.090319091s" podCreationTimestamp="2026-02-16 17:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:13.08151359 +0000 UTC m=+1397.564977984" watchObservedRunningTime="2026-02-16 17:23:13.090319091 +0000 UTC m=+1397.573783505" Feb 16 17:23:13 crc kubenswrapper[4870]: I0216 17:23:13.113976 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.630280767 podStartE2EDuration="4.113929085s" podCreationTimestamp="2026-02-16 17:23:09 +0000 UTC" firstStartedPulling="2026-02-16 17:23:11.354039853 +0000 UTC m=+1395.837504237" lastFinishedPulling="2026-02-16 17:23:11.837688181 +0000 UTC m=+1396.321152555" observedRunningTime="2026-02-16 17:23:13.103205799 +0000 UTC m=+1397.586670193" watchObservedRunningTime="2026-02-16 17:23:13.113929085 +0000 UTC m=+1397.597393469" Feb 16 17:23:14 crc kubenswrapper[4870]: I0216 17:23:14.922576 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.073401 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-config-data\") pod \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\" (UID: \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\") " Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.073463 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-combined-ca-bundle\") pod \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\" (UID: \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\") " Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.073603 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j6ll\" (UniqueName: \"kubernetes.io/projected/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-kube-api-access-8j6ll\") pod \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\" (UID: \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\") " Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.073633 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-logs\") pod \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\" (UID: \"de5b219a-9004-4f4d-8a8a-cd03eae15d3d\") " Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.074297 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-logs" (OuterVolumeSpecName: "logs") pod "de5b219a-9004-4f4d-8a8a-cd03eae15d3d" (UID: "de5b219a-9004-4f4d-8a8a-cd03eae15d3d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.082211 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-kube-api-access-8j6ll" (OuterVolumeSpecName: "kube-api-access-8j6ll") pod "de5b219a-9004-4f4d-8a8a-cd03eae15d3d" (UID: "de5b219a-9004-4f4d-8a8a-cd03eae15d3d"). InnerVolumeSpecName "kube-api-access-8j6ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.088523 4870 generic.go:334] "Generic (PLEG): container finished" podID="de5b219a-9004-4f4d-8a8a-cd03eae15d3d" containerID="6b755386ab3f66d3745906b3e2817450762a6f5ecc08c159db945d6a27324f74" exitCode=0 Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.089078 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de5b219a-9004-4f4d-8a8a-cd03eae15d3d","Type":"ContainerDied","Data":"6b755386ab3f66d3745906b3e2817450762a6f5ecc08c159db945d6a27324f74"} Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.089140 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de5b219a-9004-4f4d-8a8a-cd03eae15d3d","Type":"ContainerDied","Data":"acd8b86b449128bf82081624b003d7a9fa3291454d379bf1f29dbc2a2c2562ff"} Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.089172 4870 scope.go:117] "RemoveContainer" containerID="6b755386ab3f66d3745906b3e2817450762a6f5ecc08c159db945d6a27324f74" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.089110 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.106602 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-config-data" (OuterVolumeSpecName: "config-data") pod "de5b219a-9004-4f4d-8a8a-cd03eae15d3d" (UID: "de5b219a-9004-4f4d-8a8a-cd03eae15d3d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.110344 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de5b219a-9004-4f4d-8a8a-cd03eae15d3d" (UID: "de5b219a-9004-4f4d-8a8a-cd03eae15d3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.176431 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.176470 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.176482 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j6ll\" (UniqueName: \"kubernetes.io/projected/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-kube-api-access-8j6ll\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.176491 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5b219a-9004-4f4d-8a8a-cd03eae15d3d-logs\") on node \"crc\" DevicePath \"\"" Feb 16 
17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.184142 4870 scope.go:117] "RemoveContainer" containerID="901bf00a165d78e88a173a053b72e7523b03f077d4a65a667f60f559566791df" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.205217 4870 scope.go:117] "RemoveContainer" containerID="6b755386ab3f66d3745906b3e2817450762a6f5ecc08c159db945d6a27324f74" Feb 16 17:23:15 crc kubenswrapper[4870]: E0216 17:23:15.205644 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b755386ab3f66d3745906b3e2817450762a6f5ecc08c159db945d6a27324f74\": container with ID starting with 6b755386ab3f66d3745906b3e2817450762a6f5ecc08c159db945d6a27324f74 not found: ID does not exist" containerID="6b755386ab3f66d3745906b3e2817450762a6f5ecc08c159db945d6a27324f74" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.205683 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b755386ab3f66d3745906b3e2817450762a6f5ecc08c159db945d6a27324f74"} err="failed to get container status \"6b755386ab3f66d3745906b3e2817450762a6f5ecc08c159db945d6a27324f74\": rpc error: code = NotFound desc = could not find container \"6b755386ab3f66d3745906b3e2817450762a6f5ecc08c159db945d6a27324f74\": container with ID starting with 6b755386ab3f66d3745906b3e2817450762a6f5ecc08c159db945d6a27324f74 not found: ID does not exist" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.205709 4870 scope.go:117] "RemoveContainer" containerID="901bf00a165d78e88a173a053b72e7523b03f077d4a65a667f60f559566791df" Feb 16 17:23:15 crc kubenswrapper[4870]: E0216 17:23:15.205932 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"901bf00a165d78e88a173a053b72e7523b03f077d4a65a667f60f559566791df\": container with ID starting with 901bf00a165d78e88a173a053b72e7523b03f077d4a65a667f60f559566791df not found: ID does not exist" 
containerID="901bf00a165d78e88a173a053b72e7523b03f077d4a65a667f60f559566791df" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.205983 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"901bf00a165d78e88a173a053b72e7523b03f077d4a65a667f60f559566791df"} err="failed to get container status \"901bf00a165d78e88a173a053b72e7523b03f077d4a65a667f60f559566791df\": rpc error: code = NotFound desc = could not find container \"901bf00a165d78e88a173a053b72e7523b03f077d4a65a667f60f559566791df\": container with ID starting with 901bf00a165d78e88a173a053b72e7523b03f077d4a65a667f60f559566791df not found: ID does not exist" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.410327 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.411850 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.421316 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.435219 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.446747 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 17:23:15 crc kubenswrapper[4870]: E0216 17:23:15.447415 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de5b219a-9004-4f4d-8a8a-cd03eae15d3d" containerName="nova-api-api" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.447439 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="de5b219a-9004-4f4d-8a8a-cd03eae15d3d" containerName="nova-api-api" Feb 16 17:23:15 crc kubenswrapper[4870]: E0216 17:23:15.447475 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de5b219a-9004-4f4d-8a8a-cd03eae15d3d" 
containerName="nova-api-log" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.447483 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="de5b219a-9004-4f4d-8a8a-cd03eae15d3d" containerName="nova-api-log" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.447737 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="de5b219a-9004-4f4d-8a8a-cd03eae15d3d" containerName="nova-api-log" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.447776 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="de5b219a-9004-4f4d-8a8a-cd03eae15d3d" containerName="nova-api-api" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.449192 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.452924 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.458915 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.583566 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbrgf\" (UniqueName: \"kubernetes.io/projected/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-kube-api-access-zbrgf\") pod \"nova-api-0\" (UID: \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\") " pod="openstack/nova-api-0" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.583658 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-config-data\") pod \"nova-api-0\" (UID: \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\") " pod="openstack/nova-api-0" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.583737 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-logs\") pod \"nova-api-0\" (UID: \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\") " pod="openstack/nova-api-0" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.583837 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\") " pod="openstack/nova-api-0" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.686507 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbrgf\" (UniqueName: \"kubernetes.io/projected/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-kube-api-access-zbrgf\") pod \"nova-api-0\" (UID: \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\") " pod="openstack/nova-api-0" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.686578 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-config-data\") pod \"nova-api-0\" (UID: \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\") " pod="openstack/nova-api-0" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.686639 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-logs\") pod \"nova-api-0\" (UID: \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\") " pod="openstack/nova-api-0" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.686658 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\") " pod="openstack/nova-api-0" Feb 16 17:23:15 crc 
kubenswrapper[4870]: I0216 17:23:15.687542 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-logs\") pod \"nova-api-0\" (UID: \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\") " pod="openstack/nova-api-0" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.691985 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-config-data\") pod \"nova-api-0\" (UID: \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\") " pod="openstack/nova-api-0" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.701138 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\") " pod="openstack/nova-api-0" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.707039 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbrgf\" (UniqueName: \"kubernetes.io/projected/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-kube-api-access-zbrgf\") pod \"nova-api-0\" (UID: \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\") " pod="openstack/nova-api-0" Feb 16 17:23:15 crc kubenswrapper[4870]: I0216 17:23:15.785123 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:23:16 crc kubenswrapper[4870]: I0216 17:23:16.233455 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de5b219a-9004-4f4d-8a8a-cd03eae15d3d" path="/var/lib/kubelet/pods/de5b219a-9004-4f4d-8a8a-cd03eae15d3d/volumes" Feb 16 17:23:16 crc kubenswrapper[4870]: I0216 17:23:16.278908 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:23:16 crc kubenswrapper[4870]: I0216 17:23:16.381868 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.111782 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e5dae4f-e80d-4c69-9675-591edcf7d6dc","Type":"ContainerStarted","Data":"0548fbc3c18ab4c00a510dd3114c5f0e1f26c280bacc8ff56f77fd1f79898395"} Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.111838 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e5dae4f-e80d-4c69-9675-591edcf7d6dc","Type":"ContainerStarted","Data":"0b021b521b5e54b85f3972421f71460da3f0face9c30c89b740c3b4e1ce80fa3"} Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.111851 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e5dae4f-e80d-4c69-9675-591edcf7d6dc","Type":"ContainerStarted","Data":"d65a73a762aebba2e028fa5666892684c50d1acb9f8df83b0ceb75d62433f04b"} Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.137991 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.137966396 podStartE2EDuration="2.137966396s" podCreationTimestamp="2026-02-16 17:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:17.131858422 +0000 UTC m=+1401.615322816" 
watchObservedRunningTime="2026-02-16 17:23:17.137966396 +0000 UTC m=+1401.621430780" Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.858990 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.961307 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-run-httpd\") pod \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.961655 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-log-httpd\") pod \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.961688 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-scripts\") pod \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.961897 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2r9t\" (UniqueName: \"kubernetes.io/projected/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-kube-api-access-q2r9t\") pod \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.962077 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-combined-ca-bundle\") pod \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\" (UID: 
\"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.962087 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4ce694d3-c25b-4874-b4fe-ac2d3df6823c" (UID: "4ce694d3-c25b-4874-b4fe-ac2d3df6823c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.962120 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-sg-core-conf-yaml\") pod \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.962180 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4ce694d3-c25b-4874-b4fe-ac2d3df6823c" (UID: "4ce694d3-c25b-4874-b4fe-ac2d3df6823c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.962280 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-config-data\") pod \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\" (UID: \"4ce694d3-c25b-4874-b4fe-ac2d3df6823c\") " Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.963596 4870 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.963627 4870 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.970169 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-scripts" (OuterVolumeSpecName: "scripts") pod "4ce694d3-c25b-4874-b4fe-ac2d3df6823c" (UID: "4ce694d3-c25b-4874-b4fe-ac2d3df6823c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.970296 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-kube-api-access-q2r9t" (OuterVolumeSpecName: "kube-api-access-q2r9t") pod "4ce694d3-c25b-4874-b4fe-ac2d3df6823c" (UID: "4ce694d3-c25b-4874-b4fe-ac2d3df6823c"). InnerVolumeSpecName "kube-api-access-q2r9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:17 crc kubenswrapper[4870]: I0216 17:23:17.995119 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4ce694d3-c25b-4874-b4fe-ac2d3df6823c" (UID: "4ce694d3-c25b-4874-b4fe-ac2d3df6823c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.065302 4870 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.065343 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.065356 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2r9t\" (UniqueName: \"kubernetes.io/projected/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-kube-api-access-q2r9t\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.070582 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ce694d3-c25b-4874-b4fe-ac2d3df6823c" (UID: "4ce694d3-c25b-4874-b4fe-ac2d3df6823c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.097844 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-config-data" (OuterVolumeSpecName: "config-data") pod "4ce694d3-c25b-4874-b4fe-ac2d3df6823c" (UID: "4ce694d3-c25b-4874-b4fe-ac2d3df6823c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.127272 4870 generic.go:334] "Generic (PLEG): container finished" podID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerID="89fb7221dbe78129bb53078f7616360cbaf9865d9ef2025c23a28bbe4963f09c" exitCode=0 Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.127330 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ce694d3-c25b-4874-b4fe-ac2d3df6823c","Type":"ContainerDied","Data":"89fb7221dbe78129bb53078f7616360cbaf9865d9ef2025c23a28bbe4963f09c"} Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.127390 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.127458 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ce694d3-c25b-4874-b4fe-ac2d3df6823c","Type":"ContainerDied","Data":"e57b9be97893daf09dc62f19254c9258e2c4303c7a24a765eefc1d5c6adc5d29"} Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.127493 4870 scope.go:117] "RemoveContainer" containerID="14a4f534b28c3f00c16fb1440ecacf6c9b61b2d59b425f655a4be3ac0224b02f" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.167051 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.167097 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ce694d3-c25b-4874-b4fe-ac2d3df6823c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.169267 4870 scope.go:117] "RemoveContainer" containerID="320c7eaeb32e313405a02e9d1abd551d557608766920bf3fee6b2fcf35db7cce" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.178136 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.190303 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.198065 4870 scope.go:117] "RemoveContainer" containerID="89fb7221dbe78129bb53078f7616360cbaf9865d9ef2025c23a28bbe4963f09c" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.206640 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:18 crc kubenswrapper[4870]: E0216 17:23:18.207263 4870 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerName="proxy-httpd" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.207291 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerName="proxy-httpd" Feb 16 17:23:18 crc kubenswrapper[4870]: E0216 17:23:18.207319 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerName="ceilometer-central-agent" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.207327 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerName="ceilometer-central-agent" Feb 16 17:23:18 crc kubenswrapper[4870]: E0216 17:23:18.207348 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerName="ceilometer-notification-agent" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.207355 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerName="ceilometer-notification-agent" Feb 16 17:23:18 crc kubenswrapper[4870]: E0216 17:23:18.207371 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerName="sg-core" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.207378 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerName="sg-core" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.207619 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerName="ceilometer-central-agent" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.207634 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerName="sg-core" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.207649 4870 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerName="proxy-httpd" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.207661 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" containerName="ceilometer-notification-agent" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.210289 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.216747 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.217031 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.217198 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.217395 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.248872 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ce694d3-c25b-4874-b4fe-ac2d3df6823c" path="/var/lib/kubelet/pods/4ce694d3-c25b-4874-b4fe-ac2d3df6823c/volumes" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.251244 4870 scope.go:117] "RemoveContainer" containerID="72a5f4f9a0c505bf057c114cad802e1d386eda8f9f70036461a9a0d6c5909991" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.269568 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-scripts\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.269665 4870 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl7rj\" (UniqueName: \"kubernetes.io/projected/e3748c4b-9b03-4b60-92ec-62083cb70817-kube-api-access-kl7rj\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.269732 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3748c4b-9b03-4b60-92ec-62083cb70817-log-httpd\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.269822 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3748c4b-9b03-4b60-92ec-62083cb70817-run-httpd\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.269990 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.270119 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-config-data\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.270324 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.270358 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.275871 4870 scope.go:117] "RemoveContainer" containerID="14a4f534b28c3f00c16fb1440ecacf6c9b61b2d59b425f655a4be3ac0224b02f" Feb 16 17:23:18 crc kubenswrapper[4870]: E0216 17:23:18.276624 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14a4f534b28c3f00c16fb1440ecacf6c9b61b2d59b425f655a4be3ac0224b02f\": container with ID starting with 14a4f534b28c3f00c16fb1440ecacf6c9b61b2d59b425f655a4be3ac0224b02f not found: ID does not exist" containerID="14a4f534b28c3f00c16fb1440ecacf6c9b61b2d59b425f655a4be3ac0224b02f" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.276675 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14a4f534b28c3f00c16fb1440ecacf6c9b61b2d59b425f655a4be3ac0224b02f"} err="failed to get container status \"14a4f534b28c3f00c16fb1440ecacf6c9b61b2d59b425f655a4be3ac0224b02f\": rpc error: code = NotFound desc = could not find container \"14a4f534b28c3f00c16fb1440ecacf6c9b61b2d59b425f655a4be3ac0224b02f\": container with ID starting with 14a4f534b28c3f00c16fb1440ecacf6c9b61b2d59b425f655a4be3ac0224b02f not found: ID does not exist" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.276702 4870 scope.go:117] "RemoveContainer" containerID="320c7eaeb32e313405a02e9d1abd551d557608766920bf3fee6b2fcf35db7cce" Feb 
16 17:23:18 crc kubenswrapper[4870]: E0216 17:23:18.277412 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"320c7eaeb32e313405a02e9d1abd551d557608766920bf3fee6b2fcf35db7cce\": container with ID starting with 320c7eaeb32e313405a02e9d1abd551d557608766920bf3fee6b2fcf35db7cce not found: ID does not exist" containerID="320c7eaeb32e313405a02e9d1abd551d557608766920bf3fee6b2fcf35db7cce" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.277448 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"320c7eaeb32e313405a02e9d1abd551d557608766920bf3fee6b2fcf35db7cce"} err="failed to get container status \"320c7eaeb32e313405a02e9d1abd551d557608766920bf3fee6b2fcf35db7cce\": rpc error: code = NotFound desc = could not find container \"320c7eaeb32e313405a02e9d1abd551d557608766920bf3fee6b2fcf35db7cce\": container with ID starting with 320c7eaeb32e313405a02e9d1abd551d557608766920bf3fee6b2fcf35db7cce not found: ID does not exist" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.277480 4870 scope.go:117] "RemoveContainer" containerID="89fb7221dbe78129bb53078f7616360cbaf9865d9ef2025c23a28bbe4963f09c" Feb 16 17:23:18 crc kubenswrapper[4870]: E0216 17:23:18.277819 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89fb7221dbe78129bb53078f7616360cbaf9865d9ef2025c23a28bbe4963f09c\": container with ID starting with 89fb7221dbe78129bb53078f7616360cbaf9865d9ef2025c23a28bbe4963f09c not found: ID does not exist" containerID="89fb7221dbe78129bb53078f7616360cbaf9865d9ef2025c23a28bbe4963f09c" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.277846 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89fb7221dbe78129bb53078f7616360cbaf9865d9ef2025c23a28bbe4963f09c"} err="failed to get container status 
\"89fb7221dbe78129bb53078f7616360cbaf9865d9ef2025c23a28bbe4963f09c\": rpc error: code = NotFound desc = could not find container \"89fb7221dbe78129bb53078f7616360cbaf9865d9ef2025c23a28bbe4963f09c\": container with ID starting with 89fb7221dbe78129bb53078f7616360cbaf9865d9ef2025c23a28bbe4963f09c not found: ID does not exist" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.277862 4870 scope.go:117] "RemoveContainer" containerID="72a5f4f9a0c505bf057c114cad802e1d386eda8f9f70036461a9a0d6c5909991" Feb 16 17:23:18 crc kubenswrapper[4870]: E0216 17:23:18.278154 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72a5f4f9a0c505bf057c114cad802e1d386eda8f9f70036461a9a0d6c5909991\": container with ID starting with 72a5f4f9a0c505bf057c114cad802e1d386eda8f9f70036461a9a0d6c5909991 not found: ID does not exist" containerID="72a5f4f9a0c505bf057c114cad802e1d386eda8f9f70036461a9a0d6c5909991" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.278185 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a5f4f9a0c505bf057c114cad802e1d386eda8f9f70036461a9a0d6c5909991"} err="failed to get container status \"72a5f4f9a0c505bf057c114cad802e1d386eda8f9f70036461a9a0d6c5909991\": rpc error: code = NotFound desc = could not find container \"72a5f4f9a0c505bf057c114cad802e1d386eda8f9f70036461a9a0d6c5909991\": container with ID starting with 72a5f4f9a0c505bf057c114cad802e1d386eda8f9f70036461a9a0d6c5909991 not found: ID does not exist" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.372063 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.372118 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.372180 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-scripts\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.372234 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl7rj\" (UniqueName: \"kubernetes.io/projected/e3748c4b-9b03-4b60-92ec-62083cb70817-kube-api-access-kl7rj\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.372301 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3748c4b-9b03-4b60-92ec-62083cb70817-log-httpd\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.372348 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3748c4b-9b03-4b60-92ec-62083cb70817-run-httpd\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.372469 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.372631 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-config-data\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.376924 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3748c4b-9b03-4b60-92ec-62083cb70817-run-httpd\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.378753 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.379649 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.379784 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.380425 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e3748c4b-9b03-4b60-92ec-62083cb70817-log-httpd\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.380611 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-scripts\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.383115 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-config-data\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.394690 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl7rj\" (UniqueName: \"kubernetes.io/projected/e3748c4b-9b03-4b60-92ec-62083cb70817-kube-api-access-kl7rj\") pod \"ceilometer-0\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " pod="openstack/ceilometer-0" Feb 16 17:23:18 crc kubenswrapper[4870]: I0216 17:23:18.541913 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:23:19 crc kubenswrapper[4870]: I0216 17:23:19.068873 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:19 crc kubenswrapper[4870]: W0216 17:23:19.074872 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3748c4b_9b03_4b60_92ec_62083cb70817.slice/crio-5680e60ad3c54b10ce138e9d8240fca4d347ec1928fce78d3dc115525dada4db WatchSource:0}: Error finding container 5680e60ad3c54b10ce138e9d8240fca4d347ec1928fce78d3dc115525dada4db: Status 404 returned error can't find the container with id 5680e60ad3c54b10ce138e9d8240fca4d347ec1928fce78d3dc115525dada4db Feb 16 17:23:19 crc kubenswrapper[4870]: I0216 17:23:19.146336 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3748c4b-9b03-4b60-92ec-62083cb70817","Type":"ContainerStarted","Data":"5680e60ad3c54b10ce138e9d8240fca4d347ec1928fce78d3dc115525dada4db"} Feb 16 17:23:20 crc kubenswrapper[4870]: I0216 17:23:20.171646 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3748c4b-9b03-4b60-92ec-62083cb70817","Type":"ContainerStarted","Data":"c6db80950133e13de04db72ea23b86ab30292ddcc07fc9bb7d88f06fe360a7f0"} Feb 16 17:23:20 crc kubenswrapper[4870]: I0216 17:23:20.339074 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 16 17:23:20 crc kubenswrapper[4870]: I0216 17:23:20.410691 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 17:23:20 crc kubenswrapper[4870]: I0216 17:23:20.410804 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 17:23:20 crc kubenswrapper[4870]: I0216 17:23:20.636011 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/kube-state-metrics-0" Feb 16 17:23:21 crc kubenswrapper[4870]: I0216 17:23:21.204316 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3748c4b-9b03-4b60-92ec-62083cb70817","Type":"ContainerStarted","Data":"ced4e42140e8357e9b75b81a09e3e73296eac74c47e2086add46bbf17263ca0e"} Feb 16 17:23:21 crc kubenswrapper[4870]: I0216 17:23:21.382025 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 17:23:21 crc kubenswrapper[4870]: I0216 17:23:21.417686 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 17:23:21 crc kubenswrapper[4870]: I0216 17:23:21.426150 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="56ec8d81-47e5-4aa5-b28d-cc3c69114886" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.218:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:23:21 crc kubenswrapper[4870]: I0216 17:23:21.426413 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="56ec8d81-47e5-4aa5-b28d-cc3c69114886" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.218:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:23:22 crc kubenswrapper[4870]: I0216 17:23:22.217700 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3748c4b-9b03-4b60-92ec-62083cb70817","Type":"ContainerStarted","Data":"386c849778828c9a377e50930556df23b22b8b4accb2587804944c02f91083a0"} Feb 16 17:23:22 crc kubenswrapper[4870]: I0216 17:23:22.252648 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 17:23:23 crc kubenswrapper[4870]: I0216 17:23:23.231635 4870 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3748c4b-9b03-4b60-92ec-62083cb70817","Type":"ContainerStarted","Data":"9d72b11bf11674ab419b9ccad391247f9e47875a3bf9a84ec304b7cde8fe8e03"} Feb 16 17:23:23 crc kubenswrapper[4870]: I0216 17:23:23.271598 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.997033389 podStartE2EDuration="5.271575436s" podCreationTimestamp="2026-02-16 17:23:18 +0000 UTC" firstStartedPulling="2026-02-16 17:23:19.078271965 +0000 UTC m=+1403.561736349" lastFinishedPulling="2026-02-16 17:23:22.352814012 +0000 UTC m=+1406.836278396" observedRunningTime="2026-02-16 17:23:23.265586755 +0000 UTC m=+1407.749051139" watchObservedRunningTime="2026-02-16 17:23:23.271575436 +0000 UTC m=+1407.755039820" Feb 16 17:23:24 crc kubenswrapper[4870]: I0216 17:23:24.245631 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 17:23:25 crc kubenswrapper[4870]: I0216 17:23:25.786362 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:23:25 crc kubenswrapper[4870]: I0216 17:23:25.786446 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:23:26 crc kubenswrapper[4870]: E0216 17:23:26.359884 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:23:26 crc kubenswrapper[4870]: E0216 17:23:26.360167 4870 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:23:26 crc kubenswrapper[4870]: E0216 17:23:26.360277 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/
var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7b2p7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6hkdm_openstack(34a86750-1fff-4add-8462-7ab805ec7f89): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:23:26 crc kubenswrapper[4870]: E0216 17:23:26.361798 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:23:26 crc kubenswrapper[4870]: I0216 17:23:26.869169 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9e5dae4f-e80d-4c69-9675-591edcf7d6dc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.221:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 17:23:26 crc kubenswrapper[4870]: I0216 17:23:26.869144 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9e5dae4f-e80d-4c69-9675-591edcf7d6dc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.221:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 17:23:30 crc kubenswrapper[4870]: I0216 17:23:30.416013 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 17:23:30 crc kubenswrapper[4870]: I0216 17:23:30.421069 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 17:23:30 crc kubenswrapper[4870]: I0216 17:23:30.428006 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 17:23:31 crc kubenswrapper[4870]: I0216 17:23:31.332995 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.027361 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.119777 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9155842-8a8a-4b08-9c59-a0d1ca601473-combined-ca-bundle\") pod \"e9155842-8a8a-4b08-9c59-a0d1ca601473\" (UID: \"e9155842-8a8a-4b08-9c59-a0d1ca601473\") " Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.120254 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zvzw\" (UniqueName: \"kubernetes.io/projected/e9155842-8a8a-4b08-9c59-a0d1ca601473-kube-api-access-6zvzw\") pod \"e9155842-8a8a-4b08-9c59-a0d1ca601473\" (UID: \"e9155842-8a8a-4b08-9c59-a0d1ca601473\") " Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.120412 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9155842-8a8a-4b08-9c59-a0d1ca601473-config-data\") pod \"e9155842-8a8a-4b08-9c59-a0d1ca601473\" (UID: \"e9155842-8a8a-4b08-9c59-a0d1ca601473\") " Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.124853 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9155842-8a8a-4b08-9c59-a0d1ca601473-kube-api-access-6zvzw" (OuterVolumeSpecName: "kube-api-access-6zvzw") pod "e9155842-8a8a-4b08-9c59-a0d1ca601473" (UID: "e9155842-8a8a-4b08-9c59-a0d1ca601473"). InnerVolumeSpecName "kube-api-access-6zvzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.148443 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9155842-8a8a-4b08-9c59-a0d1ca601473-config-data" (OuterVolumeSpecName: "config-data") pod "e9155842-8a8a-4b08-9c59-a0d1ca601473" (UID: "e9155842-8a8a-4b08-9c59-a0d1ca601473"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.151086 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9155842-8a8a-4b08-9c59-a0d1ca601473-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9155842-8a8a-4b08-9c59-a0d1ca601473" (UID: "e9155842-8a8a-4b08-9c59-a0d1ca601473"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.223713 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9155842-8a8a-4b08-9c59-a0d1ca601473-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.223980 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zvzw\" (UniqueName: \"kubernetes.io/projected/e9155842-8a8a-4b08-9c59-a0d1ca601473-kube-api-access-6zvzw\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.224001 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9155842-8a8a-4b08-9c59-a0d1ca601473-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.347196 4870 generic.go:334] "Generic (PLEG): container finished" podID="e9155842-8a8a-4b08-9c59-a0d1ca601473" containerID="2b5fa282fd4daee1481286dd1ef8e29bb2079b54ab1119ebaa1d6a69a461d65c" exitCode=137 Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.348102 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.348181 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e9155842-8a8a-4b08-9c59-a0d1ca601473","Type":"ContainerDied","Data":"2b5fa282fd4daee1481286dd1ef8e29bb2079b54ab1119ebaa1d6a69a461d65c"} Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.348249 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e9155842-8a8a-4b08-9c59-a0d1ca601473","Type":"ContainerDied","Data":"06e7ed156bd021c44c51c7d9561c8aef5551a61589c6e3e59f36f64b56687034"} Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.348288 4870 scope.go:117] "RemoveContainer" containerID="2b5fa282fd4daee1481286dd1ef8e29bb2079b54ab1119ebaa1d6a69a461d65c" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.399309 4870 scope.go:117] "RemoveContainer" containerID="2b5fa282fd4daee1481286dd1ef8e29bb2079b54ab1119ebaa1d6a69a461d65c" Feb 16 17:23:33 crc kubenswrapper[4870]: E0216 17:23:33.400137 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b5fa282fd4daee1481286dd1ef8e29bb2079b54ab1119ebaa1d6a69a461d65c\": container with ID starting with 2b5fa282fd4daee1481286dd1ef8e29bb2079b54ab1119ebaa1d6a69a461d65c not found: ID does not exist" containerID="2b5fa282fd4daee1481286dd1ef8e29bb2079b54ab1119ebaa1d6a69a461d65c" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.400188 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b5fa282fd4daee1481286dd1ef8e29bb2079b54ab1119ebaa1d6a69a461d65c"} err="failed to get container status \"2b5fa282fd4daee1481286dd1ef8e29bb2079b54ab1119ebaa1d6a69a461d65c\": rpc error: code = NotFound desc = could not find container \"2b5fa282fd4daee1481286dd1ef8e29bb2079b54ab1119ebaa1d6a69a461d65c\": container with ID starting with 
2b5fa282fd4daee1481286dd1ef8e29bb2079b54ab1119ebaa1d6a69a461d65c not found: ID does not exist" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.418432 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.445642 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.455031 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:23:33 crc kubenswrapper[4870]: E0216 17:23:33.455737 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9155842-8a8a-4b08-9c59-a0d1ca601473" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.455761 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9155842-8a8a-4b08-9c59-a0d1ca601473" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.456149 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9155842-8a8a-4b08-9c59-a0d1ca601473" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.457278 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.459141 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.460729 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.460784 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.466286 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.531065 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csrm7\" (UniqueName: \"kubernetes.io/projected/9d8c610c-2125-423f-a856-03f0aeebc8fc-kube-api-access-csrm7\") pod \"nova-cell1-novncproxy-0\" (UID: \"9d8c610c-2125-423f-a856-03f0aeebc8fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.531152 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d8c610c-2125-423f-a856-03f0aeebc8fc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9d8c610c-2125-423f-a856-03f0aeebc8fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.531282 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d8c610c-2125-423f-a856-03f0aeebc8fc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9d8c610c-2125-423f-a856-03f0aeebc8fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 
crc kubenswrapper[4870]: I0216 17:23:33.531315 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d8c610c-2125-423f-a856-03f0aeebc8fc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9d8c610c-2125-423f-a856-03f0aeebc8fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.531450 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d8c610c-2125-423f-a856-03f0aeebc8fc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9d8c610c-2125-423f-a856-03f0aeebc8fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.633676 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d8c610c-2125-423f-a856-03f0aeebc8fc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9d8c610c-2125-423f-a856-03f0aeebc8fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.633738 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d8c610c-2125-423f-a856-03f0aeebc8fc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9d8c610c-2125-423f-a856-03f0aeebc8fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.633768 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d8c610c-2125-423f-a856-03f0aeebc8fc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9d8c610c-2125-423f-a856-03f0aeebc8fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 
17:23:33.633851 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csrm7\" (UniqueName: \"kubernetes.io/projected/9d8c610c-2125-423f-a856-03f0aeebc8fc-kube-api-access-csrm7\") pod \"nova-cell1-novncproxy-0\" (UID: \"9d8c610c-2125-423f-a856-03f0aeebc8fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.633913 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d8c610c-2125-423f-a856-03f0aeebc8fc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9d8c610c-2125-423f-a856-03f0aeebc8fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.639977 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d8c610c-2125-423f-a856-03f0aeebc8fc-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9d8c610c-2125-423f-a856-03f0aeebc8fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.640796 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d8c610c-2125-423f-a856-03f0aeebc8fc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9d8c610c-2125-423f-a856-03f0aeebc8fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.640811 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d8c610c-2125-423f-a856-03f0aeebc8fc-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9d8c610c-2125-423f-a856-03f0aeebc8fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.650683 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d8c610c-2125-423f-a856-03f0aeebc8fc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9d8c610c-2125-423f-a856-03f0aeebc8fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.655025 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csrm7\" (UniqueName: \"kubernetes.io/projected/9d8c610c-2125-423f-a856-03f0aeebc8fc-kube-api-access-csrm7\") pod \"nova-cell1-novncproxy-0\" (UID: \"9d8c610c-2125-423f-a856-03f0aeebc8fc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:33 crc kubenswrapper[4870]: I0216 17:23:33.779301 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:34 crc kubenswrapper[4870]: W0216 17:23:34.221482 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d8c610c_2125_423f_a856_03f0aeebc8fc.slice/crio-fa56de264b6a6eb416209b1b647a9ccd21a5093b768b1888ce9e8eda16a3c89a WatchSource:0}: Error finding container fa56de264b6a6eb416209b1b647a9ccd21a5093b768b1888ce9e8eda16a3c89a: Status 404 returned error can't find the container with id fa56de264b6a6eb416209b1b647a9ccd21a5093b768b1888ce9e8eda16a3c89a Feb 16 17:23:34 crc kubenswrapper[4870]: I0216 17:23:34.221612 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:23:34 crc kubenswrapper[4870]: I0216 17:23:34.235779 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9155842-8a8a-4b08-9c59-a0d1ca601473" path="/var/lib/kubelet/pods/e9155842-8a8a-4b08-9c59-a0d1ca601473/volumes" Feb 16 17:23:34 crc kubenswrapper[4870]: I0216 17:23:34.365707 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"9d8c610c-2125-423f-a856-03f0aeebc8fc","Type":"ContainerStarted","Data":"fa56de264b6a6eb416209b1b647a9ccd21a5093b768b1888ce9e8eda16a3c89a"} Feb 16 17:23:35 crc kubenswrapper[4870]: I0216 17:23:35.382003 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9d8c610c-2125-423f-a856-03f0aeebc8fc","Type":"ContainerStarted","Data":"f0fd01bf022749dad9ee9e302b237c0cfdce0d9c43cf7f7b9ee819041a2eed1b"} Feb 16 17:23:35 crc kubenswrapper[4870]: I0216 17:23:35.408147 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.408124818 podStartE2EDuration="2.408124818s" podCreationTimestamp="2026-02-16 17:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:35.402245051 +0000 UTC m=+1419.885709475" watchObservedRunningTime="2026-02-16 17:23:35.408124818 +0000 UTC m=+1419.891589212" Feb 16 17:23:35 crc kubenswrapper[4870]: I0216 17:23:35.790201 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 17:23:35 crc kubenswrapper[4870]: I0216 17:23:35.790775 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 17:23:35 crc kubenswrapper[4870]: I0216 17:23:35.792511 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 17:23:35 crc kubenswrapper[4870]: I0216 17:23:35.795378 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.396797 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.402433 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-api-0" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.616999 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-n8bg4"] Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.623578 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.687664 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-n8bg4"] Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.799322 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.799382 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffz4f\" (UniqueName: \"kubernetes.io/projected/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-kube-api-access-ffz4f\") pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.799725 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-config\") pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.800147 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.800311 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.800400 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.902811 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.902866 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffz4f\" (UniqueName: \"kubernetes.io/projected/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-kube-api-access-ffz4f\") pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.902916 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-config\") pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.902981 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.903894 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.903893 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.904021 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.904084 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-ovsdbserver-sb\") 
pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.904253 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-config\") pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.904647 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.904872 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.933020 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffz4f\" (UniqueName: \"kubernetes.io/projected/babf44d3-8b05-43fa-8c73-bb2ade1d08dd-kube-api-access-ffz4f\") pod \"dnsmasq-dns-89c5cd4d5-n8bg4\" (UID: \"babf44d3-8b05-43fa-8c73-bb2ade1d08dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:36 crc kubenswrapper[4870]: I0216 17:23:36.967894 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:37 crc kubenswrapper[4870]: I0216 17:23:37.491791 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-n8bg4"] Feb 16 17:23:38 crc kubenswrapper[4870]: I0216 17:23:38.417604 4870 generic.go:334] "Generic (PLEG): container finished" podID="babf44d3-8b05-43fa-8c73-bb2ade1d08dd" containerID="eff02109ae1e9cb4c1fcd56498ba603792a87d3ceab9d50147c674627eb8a01b" exitCode=0 Feb 16 17:23:38 crc kubenswrapper[4870]: I0216 17:23:38.417696 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" event={"ID":"babf44d3-8b05-43fa-8c73-bb2ade1d08dd","Type":"ContainerDied","Data":"eff02109ae1e9cb4c1fcd56498ba603792a87d3ceab9d50147c674627eb8a01b"} Feb 16 17:23:38 crc kubenswrapper[4870]: I0216 17:23:38.418242 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" event={"ID":"babf44d3-8b05-43fa-8c73-bb2ade1d08dd","Type":"ContainerStarted","Data":"a3af7c3a989fb5d94681eadf2804e3a51e94ecb0e89c1948af174a851acd24cf"} Feb 16 17:23:38 crc kubenswrapper[4870]: I0216 17:23:38.779608 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:38 crc kubenswrapper[4870]: I0216 17:23:38.999264 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:38 crc kubenswrapper[4870]: I0216 17:23:38.999671 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerName="proxy-httpd" containerID="cri-o://9d72b11bf11674ab419b9ccad391247f9e47875a3bf9a84ec304b7cde8fe8e03" gracePeriod=30 Feb 16 17:23:38 crc kubenswrapper[4870]: I0216 17:23:38.999866 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" 
containerName="sg-core" containerID="cri-o://386c849778828c9a377e50930556df23b22b8b4accb2587804944c02f91083a0" gracePeriod=30 Feb 16 17:23:39 crc kubenswrapper[4870]: I0216 17:23:39.000082 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerName="ceilometer-notification-agent" containerID="cri-o://ced4e42140e8357e9b75b81a09e3e73296eac74c47e2086add46bbf17263ca0e" gracePeriod=30 Feb 16 17:23:39 crc kubenswrapper[4870]: I0216 17:23:39.000010 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerName="ceilometer-central-agent" containerID="cri-o://c6db80950133e13de04db72ea23b86ab30292ddcc07fc9bb7d88f06fe360a7f0" gracePeriod=30 Feb 16 17:23:39 crc kubenswrapper[4870]: I0216 17:23:39.010734 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.222:3000/\": EOF" Feb 16 17:23:39 crc kubenswrapper[4870]: I0216 17:23:39.347628 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:23:39 crc kubenswrapper[4870]: I0216 17:23:39.442749 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" event={"ID":"babf44d3-8b05-43fa-8c73-bb2ade1d08dd","Type":"ContainerStarted","Data":"0a7eb4de6e6c9e2206db60c6267ad646899d1499d9116d2f839564b4666136ff"} Feb 16 17:23:39 crc kubenswrapper[4870]: I0216 17:23:39.442937 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:39 crc kubenswrapper[4870]: I0216 17:23:39.446815 4870 generic.go:334] "Generic (PLEG): container finished" podID="e3748c4b-9b03-4b60-92ec-62083cb70817" 
containerID="9d72b11bf11674ab419b9ccad391247f9e47875a3bf9a84ec304b7cde8fe8e03" exitCode=0 Feb 16 17:23:39 crc kubenswrapper[4870]: I0216 17:23:39.446836 4870 generic.go:334] "Generic (PLEG): container finished" podID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerID="386c849778828c9a377e50930556df23b22b8b4accb2587804944c02f91083a0" exitCode=2 Feb 16 17:23:39 crc kubenswrapper[4870]: I0216 17:23:39.446984 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9e5dae4f-e80d-4c69-9675-591edcf7d6dc" containerName="nova-api-log" containerID="cri-o://0b021b521b5e54b85f3972421f71460da3f0face9c30c89b740c3b4e1ce80fa3" gracePeriod=30 Feb 16 17:23:39 crc kubenswrapper[4870]: I0216 17:23:39.447147 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3748c4b-9b03-4b60-92ec-62083cb70817","Type":"ContainerDied","Data":"9d72b11bf11674ab419b9ccad391247f9e47875a3bf9a84ec304b7cde8fe8e03"} Feb 16 17:23:39 crc kubenswrapper[4870]: I0216 17:23:39.447169 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3748c4b-9b03-4b60-92ec-62083cb70817","Type":"ContainerDied","Data":"386c849778828c9a377e50930556df23b22b8b4accb2587804944c02f91083a0"} Feb 16 17:23:39 crc kubenswrapper[4870]: I0216 17:23:39.447230 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9e5dae4f-e80d-4c69-9675-591edcf7d6dc" containerName="nova-api-api" containerID="cri-o://0548fbc3c18ab4c00a510dd3114c5f0e1f26c280bacc8ff56f77fd1f79898395" gracePeriod=30 Feb 16 17:23:39 crc kubenswrapper[4870]: I0216 17:23:39.465426 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" podStartSLOduration=3.465403398 podStartE2EDuration="3.465403398s" podCreationTimestamp="2026-02-16 17:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:39.461201778 +0000 UTC m=+1423.944666162" watchObservedRunningTime="2026-02-16 17:23:39.465403398 +0000 UTC m=+1423.948867782" Feb 16 17:23:40 crc kubenswrapper[4870]: I0216 17:23:40.459386 4870 generic.go:334] "Generic (PLEG): container finished" podID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerID="c6db80950133e13de04db72ea23b86ab30292ddcc07fc9bb7d88f06fe360a7f0" exitCode=0 Feb 16 17:23:40 crc kubenswrapper[4870]: I0216 17:23:40.459445 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3748c4b-9b03-4b60-92ec-62083cb70817","Type":"ContainerDied","Data":"c6db80950133e13de04db72ea23b86ab30292ddcc07fc9bb7d88f06fe360a7f0"} Feb 16 17:23:40 crc kubenswrapper[4870]: I0216 17:23:40.463325 4870 generic.go:334] "Generic (PLEG): container finished" podID="9e5dae4f-e80d-4c69-9675-591edcf7d6dc" containerID="0b021b521b5e54b85f3972421f71460da3f0face9c30c89b740c3b4e1ce80fa3" exitCode=143 Feb 16 17:23:40 crc kubenswrapper[4870]: I0216 17:23:40.463389 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e5dae4f-e80d-4c69-9675-591edcf7d6dc","Type":"ContainerDied","Data":"0b021b521b5e54b85f3972421f71460da3f0face9c30c89b740c3b4e1ce80fa3"} Feb 16 17:23:41 crc kubenswrapper[4870]: E0216 17:23:41.225071 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.485909 4870 generic.go:334] "Generic (PLEG): container finished" podID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerID="ced4e42140e8357e9b75b81a09e3e73296eac74c47e2086add46bbf17263ca0e" exitCode=0 Feb 16 17:23:42 
crc kubenswrapper[4870]: I0216 17:23:42.486070 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3748c4b-9b03-4b60-92ec-62083cb70817","Type":"ContainerDied","Data":"ced4e42140e8357e9b75b81a09e3e73296eac74c47e2086add46bbf17263ca0e"} Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.486277 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3748c4b-9b03-4b60-92ec-62083cb70817","Type":"ContainerDied","Data":"5680e60ad3c54b10ce138e9d8240fca4d347ec1928fce78d3dc115525dada4db"} Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.486300 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5680e60ad3c54b10ce138e9d8240fca4d347ec1928fce78d3dc115525dada4db" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.541139 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.660852 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-sg-core-conf-yaml\") pod \"e3748c4b-9b03-4b60-92ec-62083cb70817\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.660968 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3748c4b-9b03-4b60-92ec-62083cb70817-log-httpd\") pod \"e3748c4b-9b03-4b60-92ec-62083cb70817\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.661069 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-ceilometer-tls-certs\") pod \"e3748c4b-9b03-4b60-92ec-62083cb70817\" (UID: 
\"e3748c4b-9b03-4b60-92ec-62083cb70817\") " Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.661120 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-scripts\") pod \"e3748c4b-9b03-4b60-92ec-62083cb70817\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.661214 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-combined-ca-bundle\") pod \"e3748c4b-9b03-4b60-92ec-62083cb70817\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.661265 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3748c4b-9b03-4b60-92ec-62083cb70817-run-httpd\") pod \"e3748c4b-9b03-4b60-92ec-62083cb70817\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.661311 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-config-data\") pod \"e3748c4b-9b03-4b60-92ec-62083cb70817\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.661364 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl7rj\" (UniqueName: \"kubernetes.io/projected/e3748c4b-9b03-4b60-92ec-62083cb70817-kube-api-access-kl7rj\") pod \"e3748c4b-9b03-4b60-92ec-62083cb70817\" (UID: \"e3748c4b-9b03-4b60-92ec-62083cb70817\") " Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.661900 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e3748c4b-9b03-4b60-92ec-62083cb70817-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e3748c4b-9b03-4b60-92ec-62083cb70817" (UID: "e3748c4b-9b03-4b60-92ec-62083cb70817"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.662029 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3748c4b-9b03-4b60-92ec-62083cb70817-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e3748c4b-9b03-4b60-92ec-62083cb70817" (UID: "e3748c4b-9b03-4b60-92ec-62083cb70817"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.667354 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-scripts" (OuterVolumeSpecName: "scripts") pod "e3748c4b-9b03-4b60-92ec-62083cb70817" (UID: "e3748c4b-9b03-4b60-92ec-62083cb70817"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.667629 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3748c4b-9b03-4b60-92ec-62083cb70817-kube-api-access-kl7rj" (OuterVolumeSpecName: "kube-api-access-kl7rj") pod "e3748c4b-9b03-4b60-92ec-62083cb70817" (UID: "e3748c4b-9b03-4b60-92ec-62083cb70817"). InnerVolumeSpecName "kube-api-access-kl7rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.697870 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e3748c4b-9b03-4b60-92ec-62083cb70817" (UID: "e3748c4b-9b03-4b60-92ec-62083cb70817"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.728925 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e3748c4b-9b03-4b60-92ec-62083cb70817" (UID: "e3748c4b-9b03-4b60-92ec-62083cb70817"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.745075 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3748c4b-9b03-4b60-92ec-62083cb70817" (UID: "e3748c4b-9b03-4b60-92ec-62083cb70817"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.771706 4870 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.771763 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.771777 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.771798 4870 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3748c4b-9b03-4b60-92ec-62083cb70817-run-httpd\") on node \"crc\" 
DevicePath \"\"" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.771811 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl7rj\" (UniqueName: \"kubernetes.io/projected/e3748c4b-9b03-4b60-92ec-62083cb70817-kube-api-access-kl7rj\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.771827 4870 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.771837 4870 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3748c4b-9b03-4b60-92ec-62083cb70817-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.796626 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-config-data" (OuterVolumeSpecName: "config-data") pod "e3748c4b-9b03-4b60-92ec-62083cb70817" (UID: "e3748c4b-9b03-4b60-92ec-62083cb70817"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:42 crc kubenswrapper[4870]: I0216 17:23:42.874742 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3748c4b-9b03-4b60-92ec-62083cb70817-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.024104 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.078334 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbrgf\" (UniqueName: \"kubernetes.io/projected/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-kube-api-access-zbrgf\") pod \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\" (UID: \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\") " Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.078427 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-config-data\") pod \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\" (UID: \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\") " Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.078728 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-logs\") pod \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\" (UID: \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\") " Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.079160 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-logs" (OuterVolumeSpecName: "logs") pod "9e5dae4f-e80d-4c69-9675-591edcf7d6dc" (UID: "9e5dae4f-e80d-4c69-9675-591edcf7d6dc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.079277 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-combined-ca-bundle\") pod \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\" (UID: \"9e5dae4f-e80d-4c69-9675-591edcf7d6dc\") " Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.080325 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.081854 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-kube-api-access-zbrgf" (OuterVolumeSpecName: "kube-api-access-zbrgf") pod "9e5dae4f-e80d-4c69-9675-591edcf7d6dc" (UID: "9e5dae4f-e80d-4c69-9675-591edcf7d6dc"). InnerVolumeSpecName "kube-api-access-zbrgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.128885 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-config-data" (OuterVolumeSpecName: "config-data") pod "9e5dae4f-e80d-4c69-9675-591edcf7d6dc" (UID: "9e5dae4f-e80d-4c69-9675-591edcf7d6dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.131336 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e5dae4f-e80d-4c69-9675-591edcf7d6dc" (UID: "9e5dae4f-e80d-4c69-9675-591edcf7d6dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.182807 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.183127 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbrgf\" (UniqueName: \"kubernetes.io/projected/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-kube-api-access-zbrgf\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.183214 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e5dae4f-e80d-4c69-9675-591edcf7d6dc-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.518847 4870 generic.go:334] "Generic (PLEG): container finished" podID="9e5dae4f-e80d-4c69-9675-591edcf7d6dc" containerID="0548fbc3c18ab4c00a510dd3114c5f0e1f26c280bacc8ff56f77fd1f79898395" exitCode=0 Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.519353 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.519395 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.519420 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e5dae4f-e80d-4c69-9675-591edcf7d6dc","Type":"ContainerDied","Data":"0548fbc3c18ab4c00a510dd3114c5f0e1f26c280bacc8ff56f77fd1f79898395"} Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.519448 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e5dae4f-e80d-4c69-9675-591edcf7d6dc","Type":"ContainerDied","Data":"d65a73a762aebba2e028fa5666892684c50d1acb9f8df83b0ceb75d62433f04b"} Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.519464 4870 scope.go:117] "RemoveContainer" containerID="0548fbc3c18ab4c00a510dd3114c5f0e1f26c280bacc8ff56f77fd1f79898395" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.553201 4870 scope.go:117] "RemoveContainer" containerID="0b021b521b5e54b85f3972421f71460da3f0face9c30c89b740c3b4e1ce80fa3" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.564805 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.580467 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.581738 4870 scope.go:117] "RemoveContainer" containerID="0548fbc3c18ab4c00a510dd3114c5f0e1f26c280bacc8ff56f77fd1f79898395" Feb 16 17:23:43 crc kubenswrapper[4870]: E0216 17:23:43.582385 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0548fbc3c18ab4c00a510dd3114c5f0e1f26c280bacc8ff56f77fd1f79898395\": container with ID starting with 0548fbc3c18ab4c00a510dd3114c5f0e1f26c280bacc8ff56f77fd1f79898395 not found: ID does not exist" containerID="0548fbc3c18ab4c00a510dd3114c5f0e1f26c280bacc8ff56f77fd1f79898395" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.582524 
4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0548fbc3c18ab4c00a510dd3114c5f0e1f26c280bacc8ff56f77fd1f79898395"} err="failed to get container status \"0548fbc3c18ab4c00a510dd3114c5f0e1f26c280bacc8ff56f77fd1f79898395\": rpc error: code = NotFound desc = could not find container \"0548fbc3c18ab4c00a510dd3114c5f0e1f26c280bacc8ff56f77fd1f79898395\": container with ID starting with 0548fbc3c18ab4c00a510dd3114c5f0e1f26c280bacc8ff56f77fd1f79898395 not found: ID does not exist" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.582642 4870 scope.go:117] "RemoveContainer" containerID="0b021b521b5e54b85f3972421f71460da3f0face9c30c89b740c3b4e1ce80fa3" Feb 16 17:23:43 crc kubenswrapper[4870]: E0216 17:23:43.583054 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b021b521b5e54b85f3972421f71460da3f0face9c30c89b740c3b4e1ce80fa3\": container with ID starting with 0b021b521b5e54b85f3972421f71460da3f0face9c30c89b740c3b4e1ce80fa3 not found: ID does not exist" containerID="0b021b521b5e54b85f3972421f71460da3f0face9c30c89b740c3b4e1ce80fa3" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.583194 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b021b521b5e54b85f3972421f71460da3f0face9c30c89b740c3b4e1ce80fa3"} err="failed to get container status \"0b021b521b5e54b85f3972421f71460da3f0face9c30c89b740c3b4e1ce80fa3\": rpc error: code = NotFound desc = could not find container \"0b021b521b5e54b85f3972421f71460da3f0face9c30c89b740c3b4e1ce80fa3\": container with ID starting with 0b021b521b5e54b85f3972421f71460da3f0face9c30c89b740c3b4e1ce80fa3 not found: ID does not exist" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.592567 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.611068 4870 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.632248 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 17:23:43 crc kubenswrapper[4870]: E0216 17:23:43.632701 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerName="proxy-httpd" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.632718 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerName="proxy-httpd" Feb 16 17:23:43 crc kubenswrapper[4870]: E0216 17:23:43.632732 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerName="ceilometer-notification-agent" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.632739 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerName="ceilometer-notification-agent" Feb 16 17:23:43 crc kubenswrapper[4870]: E0216 17:23:43.632747 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerName="ceilometer-central-agent" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.632754 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerName="ceilometer-central-agent" Feb 16 17:23:43 crc kubenswrapper[4870]: E0216 17:23:43.632766 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e5dae4f-e80d-4c69-9675-591edcf7d6dc" containerName="nova-api-api" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.632772 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e5dae4f-e80d-4c69-9675-591edcf7d6dc" containerName="nova-api-api" Feb 16 17:23:43 crc kubenswrapper[4870]: E0216 17:23:43.632787 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e5dae4f-e80d-4c69-9675-591edcf7d6dc" 
containerName="nova-api-log" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.632793 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e5dae4f-e80d-4c69-9675-591edcf7d6dc" containerName="nova-api-log" Feb 16 17:23:43 crc kubenswrapper[4870]: E0216 17:23:43.632806 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerName="sg-core" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.632812 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerName="sg-core" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.633008 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerName="sg-core" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.633027 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e5dae4f-e80d-4c69-9675-591edcf7d6dc" containerName="nova-api-log" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.633038 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerName="ceilometer-notification-agent" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.633050 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerName="proxy-httpd" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.633060 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e5dae4f-e80d-4c69-9675-591edcf7d6dc" containerName="nova-api-api" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.633078 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" containerName="ceilometer-central-agent" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.634185 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.636327 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.636528 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.637687 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.644075 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.656525 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.664972 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.669183 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.669305 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.669521 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.669714 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.693359 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.693401 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-public-tls-certs\") pod \"nova-api-0\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.693539 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.693603 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf8gf\" (UniqueName: \"kubernetes.io/projected/dff37e62-e402-47dd-bf83-16a6228876b5-kube-api-access-jf8gf\") pod \"nova-api-0\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.693623 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dff37e62-e402-47dd-bf83-16a6228876b5-logs\") pod \"nova-api-0\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.693643 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-config-data\") pod \"nova-api-0\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.781089 4870 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.795784 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.795825 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-scripts\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.795861 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.795882 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-log-httpd\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.796166 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6przh\" (UniqueName: \"kubernetes.io/projected/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-kube-api-access-6przh\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 
17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.796271 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.796332 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-public-tls-certs\") pod \"nova-api-0\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.796407 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-run-httpd\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.796584 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-config-data\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.796724 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.796859 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.797145 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf8gf\" (UniqueName: \"kubernetes.io/projected/dff37e62-e402-47dd-bf83-16a6228876b5-kube-api-access-jf8gf\") pod \"nova-api-0\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.797244 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dff37e62-e402-47dd-bf83-16a6228876b5-logs\") pod \"nova-api-0\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.797318 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-config-data\") pod \"nova-api-0\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.798144 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dff37e62-e402-47dd-bf83-16a6228876b5-logs\") pod \"nova-api-0\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.802589 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.802691 4870 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.802794 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-config-data\") pod \"nova-api-0\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.802906 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.803180 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-public-tls-certs\") pod \"nova-api-0\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.832772 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf8gf\" (UniqueName: \"kubernetes.io/projected/dff37e62-e402-47dd-bf83-16a6228876b5-kube-api-access-jf8gf\") pod \"nova-api-0\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.899401 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-config-data\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.899469 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.899587 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.899607 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-scripts\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.899628 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.899646 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-log-httpd\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.899695 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6przh\" (UniqueName: \"kubernetes.io/projected/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-kube-api-access-6przh\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc 
kubenswrapper[4870]: I0216 17:23:43.899725 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-run-httpd\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.900154 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-run-httpd\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.900773 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-log-httpd\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.905040 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-scripts\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.905450 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-config-data\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.907644 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " 
pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.908250 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.912838 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.921673 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6przh\" (UniqueName: \"kubernetes.io/projected/e0dd084b-7f2e-42bf-b06d-71ffdfaa195a-kube-api-access-6przh\") pod \"ceilometer-0\" (UID: \"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a\") " pod="openstack/ceilometer-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.954750 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:23:43 crc kubenswrapper[4870]: I0216 17:23:43.988236 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.252914 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e5dae4f-e80d-4c69-9675-591edcf7d6dc" path="/var/lib/kubelet/pods/9e5dae4f-e80d-4c69-9675-591edcf7d6dc/volumes" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.254291 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3748c4b-9b03-4b60-92ec-62083cb70817" path="/var/lib/kubelet/pods/e3748c4b-9b03-4b60-92ec-62083cb70817/volumes" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.488089 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.535590 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dff37e62-e402-47dd-bf83-16a6228876b5","Type":"ContainerStarted","Data":"0427a48ef2c943df38c0fcb918a2c96a9838bae2f904e4d58437c9c1f492b4f1"} Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.560390 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.582837 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.721650 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-qqqc5"] Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.723511 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qqqc5" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.727226 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.727532 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.733537 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qqqc5"] Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.820868 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhr7z\" (UniqueName: \"kubernetes.io/projected/92dd7a33-0622-468a-b385-e66dec6d559e-kube-api-access-jhr7z\") pod \"nova-cell1-cell-mapping-qqqc5\" (UID: \"92dd7a33-0622-468a-b385-e66dec6d559e\") " pod="openstack/nova-cell1-cell-mapping-qqqc5" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.820931 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qqqc5\" (UID: \"92dd7a33-0622-468a-b385-e66dec6d559e\") " pod="openstack/nova-cell1-cell-mapping-qqqc5" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.820998 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-scripts\") pod \"nova-cell1-cell-mapping-qqqc5\" (UID: \"92dd7a33-0622-468a-b385-e66dec6d559e\") " pod="openstack/nova-cell1-cell-mapping-qqqc5" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.821055 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-config-data\") pod \"nova-cell1-cell-mapping-qqqc5\" (UID: \"92dd7a33-0622-468a-b385-e66dec6d559e\") " pod="openstack/nova-cell1-cell-mapping-qqqc5" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.922801 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-config-data\") pod \"nova-cell1-cell-mapping-qqqc5\" (UID: \"92dd7a33-0622-468a-b385-e66dec6d559e\") " pod="openstack/nova-cell1-cell-mapping-qqqc5" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.923254 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhr7z\" (UniqueName: \"kubernetes.io/projected/92dd7a33-0622-468a-b385-e66dec6d559e-kube-api-access-jhr7z\") pod \"nova-cell1-cell-mapping-qqqc5\" (UID: \"92dd7a33-0622-468a-b385-e66dec6d559e\") " pod="openstack/nova-cell1-cell-mapping-qqqc5" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.923310 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qqqc5\" (UID: \"92dd7a33-0622-468a-b385-e66dec6d559e\") " pod="openstack/nova-cell1-cell-mapping-qqqc5" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.923355 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-scripts\") pod \"nova-cell1-cell-mapping-qqqc5\" (UID: \"92dd7a33-0622-468a-b385-e66dec6d559e\") " pod="openstack/nova-cell1-cell-mapping-qqqc5" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.927063 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-scripts\") pod \"nova-cell1-cell-mapping-qqqc5\" (UID: \"92dd7a33-0622-468a-b385-e66dec6d559e\") " pod="openstack/nova-cell1-cell-mapping-qqqc5" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.927369 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qqqc5\" (UID: \"92dd7a33-0622-468a-b385-e66dec6d559e\") " pod="openstack/nova-cell1-cell-mapping-qqqc5" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.929148 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-config-data\") pod \"nova-cell1-cell-mapping-qqqc5\" (UID: \"92dd7a33-0622-468a-b385-e66dec6d559e\") " pod="openstack/nova-cell1-cell-mapping-qqqc5" Feb 16 17:23:44 crc kubenswrapper[4870]: I0216 17:23:44.938599 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhr7z\" (UniqueName: \"kubernetes.io/projected/92dd7a33-0622-468a-b385-e66dec6d559e-kube-api-access-jhr7z\") pod \"nova-cell1-cell-mapping-qqqc5\" (UID: \"92dd7a33-0622-468a-b385-e66dec6d559e\") " pod="openstack/nova-cell1-cell-mapping-qqqc5" Feb 16 17:23:45 crc kubenswrapper[4870]: I0216 17:23:45.044332 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qqqc5" Feb 16 17:23:45 crc kubenswrapper[4870]: I0216 17:23:45.526108 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qqqc5"] Feb 16 17:23:45 crc kubenswrapper[4870]: W0216 17:23:45.536183 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92dd7a33_0622_468a_b385_e66dec6d559e.slice/crio-61d6878b5ef79486293b42f501d6f95d1a662cf1823a9bbaa5adac2fa20e8611 WatchSource:0}: Error finding container 61d6878b5ef79486293b42f501d6f95d1a662cf1823a9bbaa5adac2fa20e8611: Status 404 returned error can't find the container with id 61d6878b5ef79486293b42f501d6f95d1a662cf1823a9bbaa5adac2fa20e8611 Feb 16 17:23:45 crc kubenswrapper[4870]: I0216 17:23:45.574684 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dff37e62-e402-47dd-bf83-16a6228876b5","Type":"ContainerStarted","Data":"15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65"} Feb 16 17:23:45 crc kubenswrapper[4870]: I0216 17:23:45.574972 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dff37e62-e402-47dd-bf83-16a6228876b5","Type":"ContainerStarted","Data":"9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7"} Feb 16 17:23:45 crc kubenswrapper[4870]: I0216 17:23:45.576209 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a","Type":"ContainerStarted","Data":"cc4e0611b9a12b9f50c75efc9ad2f7a9cab3576c3ca80769926b5b9bbb4534e2"} Feb 16 17:23:45 crc kubenswrapper[4870]: I0216 17:23:45.577390 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qqqc5" event={"ID":"92dd7a33-0622-468a-b385-e66dec6d559e","Type":"ContainerStarted","Data":"61d6878b5ef79486293b42f501d6f95d1a662cf1823a9bbaa5adac2fa20e8611"} Feb 16 
17:23:46 crc kubenswrapper[4870]: I0216 17:23:46.263378 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.263360233 podStartE2EDuration="3.263360233s" podCreationTimestamp="2026-02-16 17:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:45.606550023 +0000 UTC m=+1430.090014527" watchObservedRunningTime="2026-02-16 17:23:46.263360233 +0000 UTC m=+1430.746824617" Feb 16 17:23:46 crc kubenswrapper[4870]: I0216 17:23:46.590808 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qqqc5" event={"ID":"92dd7a33-0622-468a-b385-e66dec6d559e","Type":"ContainerStarted","Data":"78af0f0b774cce6a4dc1d1cf7215a91cd4adae279a40618cee639a3bf30b79a1"} Feb 16 17:23:46 crc kubenswrapper[4870]: I0216 17:23:46.593857 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a","Type":"ContainerStarted","Data":"a9472d31ef8c603e0850112b3e56c3c288497681c3ea7c1198dc1733608b983e"} Feb 16 17:23:46 crc kubenswrapper[4870]: I0216 17:23:46.593902 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a","Type":"ContainerStarted","Data":"b6e9bb3a39a2e3a3a627c966e60b94f5737446d684c82647afcf6ed0b480aa35"} Feb 16 17:23:46 crc kubenswrapper[4870]: I0216 17:23:46.615171 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-qqqc5" podStartSLOduration=2.61514981 podStartE2EDuration="2.61514981s" podCreationTimestamp="2026-02-16 17:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:46.606794621 +0000 UTC m=+1431.090259025" watchObservedRunningTime="2026-02-16 17:23:46.61514981 
+0000 UTC m=+1431.098614194" Feb 16 17:23:46 crc kubenswrapper[4870]: I0216 17:23:46.969406 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-n8bg4" Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.064599 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-7jflt"] Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.064909 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-7jflt" podUID="81940312-121c-4c05-97cc-d15d742518fc" containerName="dnsmasq-dns" containerID="cri-o://0eb45ad9b7c918f116381e1fab55847b2281cc65a4275e742ad8df0312080529" gracePeriod=10 Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.606960 4870 generic.go:334] "Generic (PLEG): container finished" podID="81940312-121c-4c05-97cc-d15d742518fc" containerID="0eb45ad9b7c918f116381e1fab55847b2281cc65a4275e742ad8df0312080529" exitCode=0 Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.607338 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-7jflt" event={"ID":"81940312-121c-4c05-97cc-d15d742518fc","Type":"ContainerDied","Data":"0eb45ad9b7c918f116381e1fab55847b2281cc65a4275e742ad8df0312080529"} Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.607373 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-7jflt" event={"ID":"81940312-121c-4c05-97cc-d15d742518fc","Type":"ContainerDied","Data":"adf8c6acbb07f3ed6a49c023d7ef57ef5224b00638353fd554c24b6c9d2c4a96"} Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.607388 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adf8c6acbb07f3ed6a49c023d7ef57ef5224b00638353fd554c24b6c9d2c4a96" Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.610441 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a","Type":"ContainerStarted","Data":"538c6b69a8f0af35923b5457a1766799f9c5cb32f8868659ac7aa436082da5a8"} Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.702790 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.794830 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-ovsdbserver-nb\") pod \"81940312-121c-4c05-97cc-d15d742518fc\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.794890 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-config\") pod \"81940312-121c-4c05-97cc-d15d742518fc\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.794934 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-dns-swift-storage-0\") pod \"81940312-121c-4c05-97cc-d15d742518fc\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.795125 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-dns-svc\") pod \"81940312-121c-4c05-97cc-d15d742518fc\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.795190 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-ovsdbserver-sb\") 
pod \"81940312-121c-4c05-97cc-d15d742518fc\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.795246 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ldd6\" (UniqueName: \"kubernetes.io/projected/81940312-121c-4c05-97cc-d15d742518fc-kube-api-access-7ldd6\") pod \"81940312-121c-4c05-97cc-d15d742518fc\" (UID: \"81940312-121c-4c05-97cc-d15d742518fc\") " Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.800524 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81940312-121c-4c05-97cc-d15d742518fc-kube-api-access-7ldd6" (OuterVolumeSpecName: "kube-api-access-7ldd6") pod "81940312-121c-4c05-97cc-d15d742518fc" (UID: "81940312-121c-4c05-97cc-d15d742518fc"). InnerVolumeSpecName "kube-api-access-7ldd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.856679 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "81940312-121c-4c05-97cc-d15d742518fc" (UID: "81940312-121c-4c05-97cc-d15d742518fc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.860322 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "81940312-121c-4c05-97cc-d15d742518fc" (UID: "81940312-121c-4c05-97cc-d15d742518fc"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.861311 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "81940312-121c-4c05-97cc-d15d742518fc" (UID: "81940312-121c-4c05-97cc-d15d742518fc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.866448 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "81940312-121c-4c05-97cc-d15d742518fc" (UID: "81940312-121c-4c05-97cc-d15d742518fc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.871441 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-config" (OuterVolumeSpecName: "config") pod "81940312-121c-4c05-97cc-d15d742518fc" (UID: "81940312-121c-4c05-97cc-d15d742518fc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.897788 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.897827 4870 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-config\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.897840 4870 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.897852 4870 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.897864 4870 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81940312-121c-4c05-97cc-d15d742518fc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:47 crc kubenswrapper[4870]: I0216 17:23:47.897873 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ldd6\" (UniqueName: \"kubernetes.io/projected/81940312-121c-4c05-97cc-d15d742518fc-kube-api-access-7ldd6\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:48 crc kubenswrapper[4870]: I0216 17:23:48.621485 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-7jflt" Feb 16 17:23:48 crc kubenswrapper[4870]: I0216 17:23:48.672061 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-7jflt"] Feb 16 17:23:48 crc kubenswrapper[4870]: I0216 17:23:48.698262 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-7jflt"] Feb 16 17:23:49 crc kubenswrapper[4870]: I0216 17:23:49.641101 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0dd084b-7f2e-42bf-b06d-71ffdfaa195a","Type":"ContainerStarted","Data":"fcce60d61981b5e9c904bb110a946c4623b67da444371f95c59e054a93a43ac4"} Feb 16 17:23:49 crc kubenswrapper[4870]: I0216 17:23:49.641679 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 17:23:50 crc kubenswrapper[4870]: I0216 17:23:50.243599 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81940312-121c-4c05-97cc-d15d742518fc" path="/var/lib/kubelet/pods/81940312-121c-4c05-97cc-d15d742518fc/volumes" Feb 16 17:23:51 crc kubenswrapper[4870]: I0216 17:23:51.663462 4870 generic.go:334] "Generic (PLEG): container finished" podID="92dd7a33-0622-468a-b385-e66dec6d559e" containerID="78af0f0b774cce6a4dc1d1cf7215a91cd4adae279a40618cee639a3bf30b79a1" exitCode=0 Feb 16 17:23:51 crc kubenswrapper[4870]: I0216 17:23:51.663511 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qqqc5" event={"ID":"92dd7a33-0622-468a-b385-e66dec6d559e","Type":"ContainerDied","Data":"78af0f0b774cce6a4dc1d1cf7215a91cd4adae279a40618cee639a3bf30b79a1"} Feb 16 17:23:51 crc kubenswrapper[4870]: I0216 17:23:51.686449 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.5412579730000004 podStartE2EDuration="8.686430631s" podCreationTimestamp="2026-02-16 17:23:43 +0000 UTC" 
firstStartedPulling="2026-02-16 17:23:44.587526798 +0000 UTC m=+1429.070991182" lastFinishedPulling="2026-02-16 17:23:48.732699456 +0000 UTC m=+1433.216163840" observedRunningTime="2026-02-16 17:23:49.670811832 +0000 UTC m=+1434.154276236" watchObservedRunningTime="2026-02-16 17:23:51.686430631 +0000 UTC m=+1436.169895015" Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.122884 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qqqc5" Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.223679 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-config-data\") pod \"92dd7a33-0622-468a-b385-e66dec6d559e\" (UID: \"92dd7a33-0622-468a-b385-e66dec6d559e\") " Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.223778 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhr7z\" (UniqueName: \"kubernetes.io/projected/92dd7a33-0622-468a-b385-e66dec6d559e-kube-api-access-jhr7z\") pod \"92dd7a33-0622-468a-b385-e66dec6d559e\" (UID: \"92dd7a33-0622-468a-b385-e66dec6d559e\") " Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.223799 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-combined-ca-bundle\") pod \"92dd7a33-0622-468a-b385-e66dec6d559e\" (UID: \"92dd7a33-0622-468a-b385-e66dec6d559e\") " Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.223932 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-scripts\") pod \"92dd7a33-0622-468a-b385-e66dec6d559e\" (UID: \"92dd7a33-0622-468a-b385-e66dec6d559e\") " Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.237066 4870 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dd7a33-0622-468a-b385-e66dec6d559e-kube-api-access-jhr7z" (OuterVolumeSpecName: "kube-api-access-jhr7z") pod "92dd7a33-0622-468a-b385-e66dec6d559e" (UID: "92dd7a33-0622-468a-b385-e66dec6d559e"). InnerVolumeSpecName "kube-api-access-jhr7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.237079 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-scripts" (OuterVolumeSpecName: "scripts") pod "92dd7a33-0622-468a-b385-e66dec6d559e" (UID: "92dd7a33-0622-468a-b385-e66dec6d559e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.256330 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-config-data" (OuterVolumeSpecName: "config-data") pod "92dd7a33-0622-468a-b385-e66dec6d559e" (UID: "92dd7a33-0622-468a-b385-e66dec6d559e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.259349 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92dd7a33-0622-468a-b385-e66dec6d559e" (UID: "92dd7a33-0622-468a-b385-e66dec6d559e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.327139 4870 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.327437 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.327451 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhr7z\" (UniqueName: \"kubernetes.io/projected/92dd7a33-0622-468a-b385-e66dec6d559e-kube-api-access-jhr7z\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.327464 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92dd7a33-0622-468a-b385-e66dec6d559e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.687156 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qqqc5" event={"ID":"92dd7a33-0622-468a-b385-e66dec6d559e","Type":"ContainerDied","Data":"61d6878b5ef79486293b42f501d6f95d1a662cf1823a9bbaa5adac2fa20e8611"} Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.687199 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61d6878b5ef79486293b42f501d6f95d1a662cf1823a9bbaa5adac2fa20e8611" Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.687200 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qqqc5" Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.888756 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.889088 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a114e86f-d5ac-47dd-badc-7b42f764964e" containerName="nova-scheduler-scheduler" containerID="cri-o://f668687b87833b2bd6f6e583733d8e7a103d8ad5c064b7577db1329579372f00" gracePeriod=30 Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.907784 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.908034 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dff37e62-e402-47dd-bf83-16a6228876b5" containerName="nova-api-log" containerID="cri-o://9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7" gracePeriod=30 Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.908505 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dff37e62-e402-47dd-bf83-16a6228876b5" containerName="nova-api-api" containerID="cri-o://15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65" gracePeriod=30 Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.934941 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.939309 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="56ec8d81-47e5-4aa5-b28d-cc3c69114886" containerName="nova-metadata-log" containerID="cri-o://65b3fa2dc9943e461b89652edd2975c6700d29475ac4f4a365031a9dc0778ee2" gracePeriod=30 Feb 16 17:23:53 crc kubenswrapper[4870]: I0216 17:23:53.939451 4870 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="56ec8d81-47e5-4aa5-b28d-cc3c69114886" containerName="nova-metadata-metadata" containerID="cri-o://a7d2dc329d3c0674ae9606b4399bee2a0292c023b1f6b1bd0ad8244f8b403143" gracePeriod=30 Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.570611 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.654968 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-public-tls-certs\") pod \"dff37e62-e402-47dd-bf83-16a6228876b5\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.655059 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dff37e62-e402-47dd-bf83-16a6228876b5-logs\") pod \"dff37e62-e402-47dd-bf83-16a6228876b5\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.655175 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-combined-ca-bundle\") pod \"dff37e62-e402-47dd-bf83-16a6228876b5\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.655197 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-config-data\") pod \"dff37e62-e402-47dd-bf83-16a6228876b5\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.655237 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-internal-tls-certs\") pod \"dff37e62-e402-47dd-bf83-16a6228876b5\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.655315 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf8gf\" (UniqueName: \"kubernetes.io/projected/dff37e62-e402-47dd-bf83-16a6228876b5-kube-api-access-jf8gf\") pod \"dff37e62-e402-47dd-bf83-16a6228876b5\" (UID: \"dff37e62-e402-47dd-bf83-16a6228876b5\") " Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.655993 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dff37e62-e402-47dd-bf83-16a6228876b5-logs" (OuterVolumeSpecName: "logs") pod "dff37e62-e402-47dd-bf83-16a6228876b5" (UID: "dff37e62-e402-47dd-bf83-16a6228876b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.662250 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff37e62-e402-47dd-bf83-16a6228876b5-kube-api-access-jf8gf" (OuterVolumeSpecName: "kube-api-access-jf8gf") pod "dff37e62-e402-47dd-bf83-16a6228876b5" (UID: "dff37e62-e402-47dd-bf83-16a6228876b5"). InnerVolumeSpecName "kube-api-access-jf8gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.681101 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dff37e62-e402-47dd-bf83-16a6228876b5" (UID: "dff37e62-e402-47dd-bf83-16a6228876b5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.705869 4870 generic.go:334] "Generic (PLEG): container finished" podID="56ec8d81-47e5-4aa5-b28d-cc3c69114886" containerID="65b3fa2dc9943e461b89652edd2975c6700d29475ac4f4a365031a9dc0778ee2" exitCode=143 Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.705929 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"56ec8d81-47e5-4aa5-b28d-cc3c69114886","Type":"ContainerDied","Data":"65b3fa2dc9943e461b89652edd2975c6700d29475ac4f4a365031a9dc0778ee2"} Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.709327 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-config-data" (OuterVolumeSpecName: "config-data") pod "dff37e62-e402-47dd-bf83-16a6228876b5" (UID: "dff37e62-e402-47dd-bf83-16a6228876b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.710327 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dff37e62-e402-47dd-bf83-16a6228876b5" (UID: "dff37e62-e402-47dd-bf83-16a6228876b5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.711817 4870 generic.go:334] "Generic (PLEG): container finished" podID="dff37e62-e402-47dd-bf83-16a6228876b5" containerID="15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65" exitCode=0 Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.711843 4870 generic.go:334] "Generic (PLEG): container finished" podID="dff37e62-e402-47dd-bf83-16a6228876b5" containerID="9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7" exitCode=143 Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.711866 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dff37e62-e402-47dd-bf83-16a6228876b5","Type":"ContainerDied","Data":"15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65"} Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.711890 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dff37e62-e402-47dd-bf83-16a6228876b5","Type":"ContainerDied","Data":"9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7"} Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.711899 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dff37e62-e402-47dd-bf83-16a6228876b5","Type":"ContainerDied","Data":"0427a48ef2c943df38c0fcb918a2c96a9838bae2f904e4d58437c9c1f492b4f1"} Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.711913 4870 scope.go:117] "RemoveContainer" containerID="15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.712093 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.726595 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dff37e62-e402-47dd-bf83-16a6228876b5" (UID: "dff37e62-e402-47dd-bf83-16a6228876b5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.737086 4870 scope.go:117] "RemoveContainer" containerID="9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.754236 4870 scope.go:117] "RemoveContainer" containerID="15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65" Feb 16 17:23:54 crc kubenswrapper[4870]: E0216 17:23:54.754602 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65\": container with ID starting with 15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65 not found: ID does not exist" containerID="15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.754646 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65"} err="failed to get container status \"15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65\": rpc error: code = NotFound desc = could not find container \"15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65\": container with ID starting with 15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65 not found: ID does not exist" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.754676 4870 scope.go:117] 
"RemoveContainer" containerID="9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7" Feb 16 17:23:54 crc kubenswrapper[4870]: E0216 17:23:54.754922 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7\": container with ID starting with 9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7 not found: ID does not exist" containerID="9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.754964 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7"} err="failed to get container status \"9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7\": rpc error: code = NotFound desc = could not find container \"9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7\": container with ID starting with 9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7 not found: ID does not exist" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.754988 4870 scope.go:117] "RemoveContainer" containerID="15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.755355 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65"} err="failed to get container status \"15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65\": rpc error: code = NotFound desc = could not find container \"15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65\": container with ID starting with 15da6f84ca0e53164c79dafea9a3105c7308914f841655159c3eb0e70f0b1f65 not found: ID does not exist" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.755378 4870 
scope.go:117] "RemoveContainer" containerID="9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.755587 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7"} err="failed to get container status \"9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7\": rpc error: code = NotFound desc = could not find container \"9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7\": container with ID starting with 9ba2aa677999f44094ddc8ecf4cb1a61b9641d4956ed3f382ca65da9388189e7 not found: ID does not exist" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.757717 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.757736 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.757745 4870 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.757755 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf8gf\" (UniqueName: \"kubernetes.io/projected/dff37e62-e402-47dd-bf83-16a6228876b5-kube-api-access-jf8gf\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.757763 4870 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dff37e62-e402-47dd-bf83-16a6228876b5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:54 crc kubenswrapper[4870]: I0216 17:23:54.757771 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dff37e62-e402-47dd-bf83-16a6228876b5-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.095114 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.107232 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.117964 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 17:23:55 crc kubenswrapper[4870]: E0216 17:23:55.118646 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff37e62-e402-47dd-bf83-16a6228876b5" containerName="nova-api-api" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.118684 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff37e62-e402-47dd-bf83-16a6228876b5" containerName="nova-api-api" Feb 16 17:23:55 crc kubenswrapper[4870]: E0216 17:23:55.118702 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dd7a33-0622-468a-b385-e66dec6d559e" containerName="nova-manage" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.118714 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dd7a33-0622-468a-b385-e66dec6d559e" containerName="nova-manage" Feb 16 17:23:55 crc kubenswrapper[4870]: E0216 17:23:55.118735 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81940312-121c-4c05-97cc-d15d742518fc" containerName="init" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.118746 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="81940312-121c-4c05-97cc-d15d742518fc" containerName="init" Feb 16 17:23:55 crc kubenswrapper[4870]: E0216 
17:23:55.118758 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff37e62-e402-47dd-bf83-16a6228876b5" containerName="nova-api-log" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.118771 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff37e62-e402-47dd-bf83-16a6228876b5" containerName="nova-api-log" Feb 16 17:23:55 crc kubenswrapper[4870]: E0216 17:23:55.118842 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81940312-121c-4c05-97cc-d15d742518fc" containerName="dnsmasq-dns" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.118855 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="81940312-121c-4c05-97cc-d15d742518fc" containerName="dnsmasq-dns" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.119204 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="dff37e62-e402-47dd-bf83-16a6228876b5" containerName="nova-api-api" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.119238 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="81940312-121c-4c05-97cc-d15d742518fc" containerName="dnsmasq-dns" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.119256 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="dff37e62-e402-47dd-bf83-16a6228876b5" containerName="nova-api-log" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.119282 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="92dd7a33-0622-468a-b385-e66dec6d559e" containerName="nova-manage" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.121277 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.124650 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.125727 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.125779 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.146594 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.165467 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dd2dad7-e696-4e73-91bf-572ee65c541a-logs\") pod \"nova-api-0\" (UID: \"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.165701 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd2dad7-e696-4e73-91bf-572ee65c541a-config-data\") pod \"nova-api-0\" (UID: \"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.166080 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dd2dad7-e696-4e73-91bf-572ee65c541a-public-tls-certs\") pod \"nova-api-0\" (UID: \"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.166208 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvf84\" (UniqueName: 
\"kubernetes.io/projected/2dd2dad7-e696-4e73-91bf-572ee65c541a-kube-api-access-vvf84\") pod \"nova-api-0\" (UID: \"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.166300 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd2dad7-e696-4e73-91bf-572ee65c541a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.166392 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dd2dad7-e696-4e73-91bf-572ee65c541a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.267682 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dd2dad7-e696-4e73-91bf-572ee65c541a-logs\") pod \"nova-api-0\" (UID: \"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.267836 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd2dad7-e696-4e73-91bf-572ee65c541a-config-data\") pod \"nova-api-0\" (UID: \"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.267921 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dd2dad7-e696-4e73-91bf-572ee65c541a-public-tls-certs\") pod \"nova-api-0\" (UID: \"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc 
kubenswrapper[4870]: I0216 17:23:55.268014 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvf84\" (UniqueName: \"kubernetes.io/projected/2dd2dad7-e696-4e73-91bf-572ee65c541a-kube-api-access-vvf84\") pod \"nova-api-0\" (UID: \"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.268052 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd2dad7-e696-4e73-91bf-572ee65c541a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.268083 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dd2dad7-e696-4e73-91bf-572ee65c541a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.268157 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dd2dad7-e696-4e73-91bf-572ee65c541a-logs\") pod \"nova-api-0\" (UID: \"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.273217 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dd2dad7-e696-4e73-91bf-572ee65c541a-public-tls-certs\") pod \"nova-api-0\" (UID: \"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.273221 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd2dad7-e696-4e73-91bf-572ee65c541a-config-data\") pod \"nova-api-0\" (UID: 
\"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.274227 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2dd2dad7-e696-4e73-91bf-572ee65c541a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.275455 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd2dad7-e696-4e73-91bf-572ee65c541a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.287605 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvf84\" (UniqueName: \"kubernetes.io/projected/2dd2dad7-e696-4e73-91bf-572ee65c541a-kube-api-access-vvf84\") pod \"nova-api-0\" (UID: \"2dd2dad7-e696-4e73-91bf-572ee65c541a\") " pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.438512 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:23:55 crc kubenswrapper[4870]: I0216 17:23:55.893041 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:23:55 crc kubenswrapper[4870]: W0216 17:23:55.897339 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dd2dad7_e696_4e73_91bf_572ee65c541a.slice/crio-ca88d715bd1c785af17967dc2df7e2e24124bcb0e5f20688a1ba8f014078bc1f WatchSource:0}: Error finding container ca88d715bd1c785af17967dc2df7e2e24124bcb0e5f20688a1ba8f014078bc1f: Status 404 returned error can't find the container with id ca88d715bd1c785af17967dc2df7e2e24124bcb0e5f20688a1ba8f014078bc1f Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.175744 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:23:56 crc kubenswrapper[4870]: E0216 17:23:56.235045 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.249881 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dff37e62-e402-47dd-bf83-16a6228876b5" path="/var/lib/kubelet/pods/dff37e62-e402-47dd-bf83-16a6228876b5/volumes" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.296737 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a114e86f-d5ac-47dd-badc-7b42f764964e-config-data\") pod \"a114e86f-d5ac-47dd-badc-7b42f764964e\" (UID: \"a114e86f-d5ac-47dd-badc-7b42f764964e\") " Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.296899 4870 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hr4c\" (UniqueName: \"kubernetes.io/projected/a114e86f-d5ac-47dd-badc-7b42f764964e-kube-api-access-2hr4c\") pod \"a114e86f-d5ac-47dd-badc-7b42f764964e\" (UID: \"a114e86f-d5ac-47dd-badc-7b42f764964e\") " Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.297024 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a114e86f-d5ac-47dd-badc-7b42f764964e-combined-ca-bundle\") pod \"a114e86f-d5ac-47dd-badc-7b42f764964e\" (UID: \"a114e86f-d5ac-47dd-badc-7b42f764964e\") " Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.301732 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a114e86f-d5ac-47dd-badc-7b42f764964e-kube-api-access-2hr4c" (OuterVolumeSpecName: "kube-api-access-2hr4c") pod "a114e86f-d5ac-47dd-badc-7b42f764964e" (UID: "a114e86f-d5ac-47dd-badc-7b42f764964e"). InnerVolumeSpecName "kube-api-access-2hr4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.324050 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a114e86f-d5ac-47dd-badc-7b42f764964e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a114e86f-d5ac-47dd-badc-7b42f764964e" (UID: "a114e86f-d5ac-47dd-badc-7b42f764964e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.362209 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a114e86f-d5ac-47dd-badc-7b42f764964e-config-data" (OuterVolumeSpecName: "config-data") pod "a114e86f-d5ac-47dd-badc-7b42f764964e" (UID: "a114e86f-d5ac-47dd-badc-7b42f764964e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.401433 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hr4c\" (UniqueName: \"kubernetes.io/projected/a114e86f-d5ac-47dd-badc-7b42f764964e-kube-api-access-2hr4c\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.401471 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a114e86f-d5ac-47dd-badc-7b42f764964e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.401480 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a114e86f-d5ac-47dd-badc-7b42f764964e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.755636 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2dd2dad7-e696-4e73-91bf-572ee65c541a","Type":"ContainerStarted","Data":"067f7c7a481af8cf007da20c6942ec81f9c61d02781ff7d071b4cb8590a0de89"} Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.755984 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2dd2dad7-e696-4e73-91bf-572ee65c541a","Type":"ContainerStarted","Data":"f513649425dacbb71839aca35b22e82f7ff4b3bc8ec0bbd45e219e1a1da1352d"} Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.756005 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2dd2dad7-e696-4e73-91bf-572ee65c541a","Type":"ContainerStarted","Data":"ca88d715bd1c785af17967dc2df7e2e24124bcb0e5f20688a1ba8f014078bc1f"} Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.757284 4870 generic.go:334] "Generic (PLEG): container finished" podID="a114e86f-d5ac-47dd-badc-7b42f764964e" 
containerID="f668687b87833b2bd6f6e583733d8e7a103d8ad5c064b7577db1329579372f00" exitCode=0 Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.757317 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.757319 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a114e86f-d5ac-47dd-badc-7b42f764964e","Type":"ContainerDied","Data":"f668687b87833b2bd6f6e583733d8e7a103d8ad5c064b7577db1329579372f00"} Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.757479 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a114e86f-d5ac-47dd-badc-7b42f764964e","Type":"ContainerDied","Data":"b744b61c17935a2f23d0f0b577f6f60fd1c255909698eebb784a72433ba62263"} Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.757515 4870 scope.go:117] "RemoveContainer" containerID="f668687b87833b2bd6f6e583733d8e7a103d8ad5c064b7577db1329579372f00" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.776744 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.776725444 podStartE2EDuration="1.776725444s" podCreationTimestamp="2026-02-16 17:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:56.775023195 +0000 UTC m=+1441.258487589" watchObservedRunningTime="2026-02-16 17:23:56.776725444 +0000 UTC m=+1441.260189838" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.777443 4870 scope.go:117] "RemoveContainer" containerID="f668687b87833b2bd6f6e583733d8e7a103d8ad5c064b7577db1329579372f00" Feb 16 17:23:56 crc kubenswrapper[4870]: E0216 17:23:56.777825 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f668687b87833b2bd6f6e583733d8e7a103d8ad5c064b7577db1329579372f00\": container with ID starting with f668687b87833b2bd6f6e583733d8e7a103d8ad5c064b7577db1329579372f00 not found: ID does not exist" containerID="f668687b87833b2bd6f6e583733d8e7a103d8ad5c064b7577db1329579372f00" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.777860 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f668687b87833b2bd6f6e583733d8e7a103d8ad5c064b7577db1329579372f00"} err="failed to get container status \"f668687b87833b2bd6f6e583733d8e7a103d8ad5c064b7577db1329579372f00\": rpc error: code = NotFound desc = could not find container \"f668687b87833b2bd6f6e583733d8e7a103d8ad5c064b7577db1329579372f00\": container with ID starting with f668687b87833b2bd6f6e583733d8e7a103d8ad5c064b7577db1329579372f00 not found: ID does not exist" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.802155 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.814093 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.826172 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:23:56 crc kubenswrapper[4870]: E0216 17:23:56.826711 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a114e86f-d5ac-47dd-badc-7b42f764964e" containerName="nova-scheduler-scheduler" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.826734 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a114e86f-d5ac-47dd-badc-7b42f764964e" containerName="nova-scheduler-scheduler" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.827016 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="a114e86f-d5ac-47dd-badc-7b42f764964e" containerName="nova-scheduler-scheduler" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 
17:23:56.827926 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.834031 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.848075 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.911322 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3a5425-a813-4cb1-8b27-c19fb83c7fbc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ae3a5425-a813-4cb1-8b27-c19fb83c7fbc\") " pod="openstack/nova-scheduler-0" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.911443 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqvg5\" (UniqueName: \"kubernetes.io/projected/ae3a5425-a813-4cb1-8b27-c19fb83c7fbc-kube-api-access-fqvg5\") pod \"nova-scheduler-0\" (UID: \"ae3a5425-a813-4cb1-8b27-c19fb83c7fbc\") " pod="openstack/nova-scheduler-0" Feb 16 17:23:56 crc kubenswrapper[4870]: I0216 17:23:56.911535 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3a5425-a813-4cb1-8b27-c19fb83c7fbc-config-data\") pod \"nova-scheduler-0\" (UID: \"ae3a5425-a813-4cb1-8b27-c19fb83c7fbc\") " pod="openstack/nova-scheduler-0" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.012808 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqvg5\" (UniqueName: \"kubernetes.io/projected/ae3a5425-a813-4cb1-8b27-c19fb83c7fbc-kube-api-access-fqvg5\") pod \"nova-scheduler-0\" (UID: \"ae3a5425-a813-4cb1-8b27-c19fb83c7fbc\") " pod="openstack/nova-scheduler-0" Feb 16 
17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.012932 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3a5425-a813-4cb1-8b27-c19fb83c7fbc-config-data\") pod \"nova-scheduler-0\" (UID: \"ae3a5425-a813-4cb1-8b27-c19fb83c7fbc\") " pod="openstack/nova-scheduler-0" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.013040 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3a5425-a813-4cb1-8b27-c19fb83c7fbc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ae3a5425-a813-4cb1-8b27-c19fb83c7fbc\") " pod="openstack/nova-scheduler-0" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.017597 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3a5425-a813-4cb1-8b27-c19fb83c7fbc-config-data\") pod \"nova-scheduler-0\" (UID: \"ae3a5425-a813-4cb1-8b27-c19fb83c7fbc\") " pod="openstack/nova-scheduler-0" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.020085 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3a5425-a813-4cb1-8b27-c19fb83c7fbc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ae3a5425-a813-4cb1-8b27-c19fb83c7fbc\") " pod="openstack/nova-scheduler-0" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.030534 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqvg5\" (UniqueName: \"kubernetes.io/projected/ae3a5425-a813-4cb1-8b27-c19fb83c7fbc-kube-api-access-fqvg5\") pod \"nova-scheduler-0\" (UID: \"ae3a5425-a813-4cb1-8b27-c19fb83c7fbc\") " pod="openstack/nova-scheduler-0" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.085342 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="56ec8d81-47e5-4aa5-b28d-cc3c69114886" 
containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.218:8775/\": read tcp 10.217.0.2:50926->10.217.0.218:8775: read: connection reset by peer" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.085352 4870 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="56ec8d81-47e5-4aa5-b28d-cc3c69114886" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.218:8775/\": read tcp 10.217.0.2:50922->10.217.0.218:8775: read: connection reset by peer" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.153767 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.538281 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:23:57 crc kubenswrapper[4870]: W0216 17:23:57.541902 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae3a5425_a813_4cb1_8b27_c19fb83c7fbc.slice/crio-b28cce83a404eca5eb9311587627b205a4232024679d6b2329a759af0000b6cc WatchSource:0}: Error finding container b28cce83a404eca5eb9311587627b205a4232024679d6b2329a759af0000b6cc: Status 404 returned error can't find the container with id b28cce83a404eca5eb9311587627b205a4232024679d6b2329a759af0000b6cc Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.586743 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.630300 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzkrt\" (UniqueName: \"kubernetes.io/projected/56ec8d81-47e5-4aa5-b28d-cc3c69114886-kube-api-access-bzkrt\") pod \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.630362 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-config-data\") pod \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.630497 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ec8d81-47e5-4aa5-b28d-cc3c69114886-logs\") pod \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.630542 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-combined-ca-bundle\") pod \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.630687 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-nova-metadata-tls-certs\") pod \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\" (UID: \"56ec8d81-47e5-4aa5-b28d-cc3c69114886\") " Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.631751 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/56ec8d81-47e5-4aa5-b28d-cc3c69114886-logs" (OuterVolumeSpecName: "logs") pod "56ec8d81-47e5-4aa5-b28d-cc3c69114886" (UID: "56ec8d81-47e5-4aa5-b28d-cc3c69114886"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.641350 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ec8d81-47e5-4aa5-b28d-cc3c69114886-kube-api-access-bzkrt" (OuterVolumeSpecName: "kube-api-access-bzkrt") pod "56ec8d81-47e5-4aa5-b28d-cc3c69114886" (UID: "56ec8d81-47e5-4aa5-b28d-cc3c69114886"). InnerVolumeSpecName "kube-api-access-bzkrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.682902 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-config-data" (OuterVolumeSpecName: "config-data") pod "56ec8d81-47e5-4aa5-b28d-cc3c69114886" (UID: "56ec8d81-47e5-4aa5-b28d-cc3c69114886"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.692573 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56ec8d81-47e5-4aa5-b28d-cc3c69114886" (UID: "56ec8d81-47e5-4aa5-b28d-cc3c69114886"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.722338 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "56ec8d81-47e5-4aa5-b28d-cc3c69114886" (UID: "56ec8d81-47e5-4aa5-b28d-cc3c69114886"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.738307 4870 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.738350 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzkrt\" (UniqueName: \"kubernetes.io/projected/56ec8d81-47e5-4aa5-b28d-cc3c69114886-kube-api-access-bzkrt\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.738363 4870 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.738384 4870 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ec8d81-47e5-4aa5-b28d-cc3c69114886-logs\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.738399 4870 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ec8d81-47e5-4aa5-b28d-cc3c69114886-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.769533 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ae3a5425-a813-4cb1-8b27-c19fb83c7fbc","Type":"ContainerStarted","Data":"3d608801d3d51215ae0f405429dc9af55e28a0ddc767f5d5e7523e5800110a4f"} Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.769615 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"ae3a5425-a813-4cb1-8b27-c19fb83c7fbc","Type":"ContainerStarted","Data":"b28cce83a404eca5eb9311587627b205a4232024679d6b2329a759af0000b6cc"} Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.773822 4870 generic.go:334] "Generic (PLEG): container finished" podID="56ec8d81-47e5-4aa5-b28d-cc3c69114886" containerID="a7d2dc329d3c0674ae9606b4399bee2a0292c023b1f6b1bd0ad8244f8b403143" exitCode=0 Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.773872 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"56ec8d81-47e5-4aa5-b28d-cc3c69114886","Type":"ContainerDied","Data":"a7d2dc329d3c0674ae9606b4399bee2a0292c023b1f6b1bd0ad8244f8b403143"} Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.773903 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.773918 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"56ec8d81-47e5-4aa5-b28d-cc3c69114886","Type":"ContainerDied","Data":"c8470d0208279950208e94caf3a5c5c729c683d6c287cab9519e8736c32c14c5"} Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.773963 4870 scope.go:117] "RemoveContainer" containerID="a7d2dc329d3c0674ae9606b4399bee2a0292c023b1f6b1bd0ad8244f8b403143" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.792303 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.7922856600000001 podStartE2EDuration="1.79228566s" podCreationTimestamp="2026-02-16 17:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:57.791286871 +0000 UTC m=+1442.274751255" watchObservedRunningTime="2026-02-16 17:23:57.79228566 +0000 UTC m=+1442.275750054" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.809022 4870 
scope.go:117] "RemoveContainer" containerID="65b3fa2dc9943e461b89652edd2975c6700d29475ac4f4a365031a9dc0778ee2" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.828283 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.839872 4870 scope.go:117] "RemoveContainer" containerID="a7d2dc329d3c0674ae9606b4399bee2a0292c023b1f6b1bd0ad8244f8b403143" Feb 16 17:23:57 crc kubenswrapper[4870]: E0216 17:23:57.840698 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7d2dc329d3c0674ae9606b4399bee2a0292c023b1f6b1bd0ad8244f8b403143\": container with ID starting with a7d2dc329d3c0674ae9606b4399bee2a0292c023b1f6b1bd0ad8244f8b403143 not found: ID does not exist" containerID="a7d2dc329d3c0674ae9606b4399bee2a0292c023b1f6b1bd0ad8244f8b403143" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.840736 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d2dc329d3c0674ae9606b4399bee2a0292c023b1f6b1bd0ad8244f8b403143"} err="failed to get container status \"a7d2dc329d3c0674ae9606b4399bee2a0292c023b1f6b1bd0ad8244f8b403143\": rpc error: code = NotFound desc = could not find container \"a7d2dc329d3c0674ae9606b4399bee2a0292c023b1f6b1bd0ad8244f8b403143\": container with ID starting with a7d2dc329d3c0674ae9606b4399bee2a0292c023b1f6b1bd0ad8244f8b403143 not found: ID does not exist" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.840766 4870 scope.go:117] "RemoveContainer" containerID="65b3fa2dc9943e461b89652edd2975c6700d29475ac4f4a365031a9dc0778ee2" Feb 16 17:23:57 crc kubenswrapper[4870]: E0216 17:23:57.841028 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65b3fa2dc9943e461b89652edd2975c6700d29475ac4f4a365031a9dc0778ee2\": container with ID starting with 
65b3fa2dc9943e461b89652edd2975c6700d29475ac4f4a365031a9dc0778ee2 not found: ID does not exist" containerID="65b3fa2dc9943e461b89652edd2975c6700d29475ac4f4a365031a9dc0778ee2" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.841054 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65b3fa2dc9943e461b89652edd2975c6700d29475ac4f4a365031a9dc0778ee2"} err="failed to get container status \"65b3fa2dc9943e461b89652edd2975c6700d29475ac4f4a365031a9dc0778ee2\": rpc error: code = NotFound desc = could not find container \"65b3fa2dc9943e461b89652edd2975c6700d29475ac4f4a365031a9dc0778ee2\": container with ID starting with 65b3fa2dc9943e461b89652edd2975c6700d29475ac4f4a365031a9dc0778ee2 not found: ID does not exist" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.856259 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.868887 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:23:57 crc kubenswrapper[4870]: E0216 17:23:57.869835 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ec8d81-47e5-4aa5-b28d-cc3c69114886" containerName="nova-metadata-log" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.869852 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ec8d81-47e5-4aa5-b28d-cc3c69114886" containerName="nova-metadata-log" Feb 16 17:23:57 crc kubenswrapper[4870]: E0216 17:23:57.869870 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ec8d81-47e5-4aa5-b28d-cc3c69114886" containerName="nova-metadata-metadata" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.869878 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ec8d81-47e5-4aa5-b28d-cc3c69114886" containerName="nova-metadata-metadata" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.870256 4870 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="56ec8d81-47e5-4aa5-b28d-cc3c69114886" containerName="nova-metadata-log" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.870303 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ec8d81-47e5-4aa5-b28d-cc3c69114886" containerName="nova-metadata-metadata" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.872060 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.880814 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.884708 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.884922 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.943566 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328df347-b011-47c4-912c-a4eb850c9146-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"328df347-b011-47c4-912c-a4eb850c9146\") " pod="openstack/nova-metadata-0" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.943730 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/328df347-b011-47c4-912c-a4eb850c9146-logs\") pod \"nova-metadata-0\" (UID: \"328df347-b011-47c4-912c-a4eb850c9146\") " pod="openstack/nova-metadata-0" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.944012 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/328df347-b011-47c4-912c-a4eb850c9146-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"328df347-b011-47c4-912c-a4eb850c9146\") " pod="openstack/nova-metadata-0" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.944117 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/328df347-b011-47c4-912c-a4eb850c9146-config-data\") pod \"nova-metadata-0\" (UID: \"328df347-b011-47c4-912c-a4eb850c9146\") " pod="openstack/nova-metadata-0" Feb 16 17:23:57 crc kubenswrapper[4870]: I0216 17:23:57.944252 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjp4k\" (UniqueName: \"kubernetes.io/projected/328df347-b011-47c4-912c-a4eb850c9146-kube-api-access-bjp4k\") pod \"nova-metadata-0\" (UID: \"328df347-b011-47c4-912c-a4eb850c9146\") " pod="openstack/nova-metadata-0" Feb 16 17:23:58 crc kubenswrapper[4870]: I0216 17:23:58.045896 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328df347-b011-47c4-912c-a4eb850c9146-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"328df347-b011-47c4-912c-a4eb850c9146\") " pod="openstack/nova-metadata-0" Feb 16 17:23:58 crc kubenswrapper[4870]: I0216 17:23:58.045994 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/328df347-b011-47c4-912c-a4eb850c9146-logs\") pod \"nova-metadata-0\" (UID: \"328df347-b011-47c4-912c-a4eb850c9146\") " pod="openstack/nova-metadata-0" Feb 16 17:23:58 crc kubenswrapper[4870]: I0216 17:23:58.046068 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/328df347-b011-47c4-912c-a4eb850c9146-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"328df347-b011-47c4-912c-a4eb850c9146\") " pod="openstack/nova-metadata-0" Feb 16 17:23:58 crc kubenswrapper[4870]: I0216 17:23:58.046107 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/328df347-b011-47c4-912c-a4eb850c9146-config-data\") pod \"nova-metadata-0\" (UID: \"328df347-b011-47c4-912c-a4eb850c9146\") " pod="openstack/nova-metadata-0" Feb 16 17:23:58 crc kubenswrapper[4870]: I0216 17:23:58.046147 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjp4k\" (UniqueName: \"kubernetes.io/projected/328df347-b011-47c4-912c-a4eb850c9146-kube-api-access-bjp4k\") pod \"nova-metadata-0\" (UID: \"328df347-b011-47c4-912c-a4eb850c9146\") " pod="openstack/nova-metadata-0" Feb 16 17:23:58 crc kubenswrapper[4870]: I0216 17:23:58.046538 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/328df347-b011-47c4-912c-a4eb850c9146-logs\") pod \"nova-metadata-0\" (UID: \"328df347-b011-47c4-912c-a4eb850c9146\") " pod="openstack/nova-metadata-0" Feb 16 17:23:58 crc kubenswrapper[4870]: I0216 17:23:58.049716 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/328df347-b011-47c4-912c-a4eb850c9146-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"328df347-b011-47c4-912c-a4eb850c9146\") " pod="openstack/nova-metadata-0" Feb 16 17:23:58 crc kubenswrapper[4870]: I0216 17:23:58.049800 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328df347-b011-47c4-912c-a4eb850c9146-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"328df347-b011-47c4-912c-a4eb850c9146\") " pod="openstack/nova-metadata-0" Feb 16 17:23:58 crc kubenswrapper[4870]: I0216 17:23:58.051573 4870 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/328df347-b011-47c4-912c-a4eb850c9146-config-data\") pod \"nova-metadata-0\" (UID: \"328df347-b011-47c4-912c-a4eb850c9146\") " pod="openstack/nova-metadata-0" Feb 16 17:23:58 crc kubenswrapper[4870]: I0216 17:23:58.064552 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjp4k\" (UniqueName: \"kubernetes.io/projected/328df347-b011-47c4-912c-a4eb850c9146-kube-api-access-bjp4k\") pod \"nova-metadata-0\" (UID: \"328df347-b011-47c4-912c-a4eb850c9146\") " pod="openstack/nova-metadata-0" Feb 16 17:23:58 crc kubenswrapper[4870]: I0216 17:23:58.205608 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:23:58 crc kubenswrapper[4870]: I0216 17:23:58.235803 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56ec8d81-47e5-4aa5-b28d-cc3c69114886" path="/var/lib/kubelet/pods/56ec8d81-47e5-4aa5-b28d-cc3c69114886/volumes" Feb 16 17:23:58 crc kubenswrapper[4870]: I0216 17:23:58.236526 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a114e86f-d5ac-47dd-badc-7b42f764964e" path="/var/lib/kubelet/pods/a114e86f-d5ac-47dd-badc-7b42f764964e/volumes" Feb 16 17:23:58 crc kubenswrapper[4870]: I0216 17:23:58.568684 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:23:58 crc kubenswrapper[4870]: I0216 17:23:58.788835 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"328df347-b011-47c4-912c-a4eb850c9146","Type":"ContainerStarted","Data":"0e669e8fac6d670de3fc5cdecb774da81a37134d4b930fcb08c219f25c42d8a2"} Feb 16 17:23:58 crc kubenswrapper[4870]: I0216 17:23:58.789223 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"328df347-b011-47c4-912c-a4eb850c9146","Type":"ContainerStarted","Data":"37d4ada6d97e5abf4ff2f2eadcb5f24f90e42a992a050441e45ed051b19915b1"} Feb 16 17:23:59 crc kubenswrapper[4870]: I0216 17:23:59.806469 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"328df347-b011-47c4-912c-a4eb850c9146","Type":"ContainerStarted","Data":"20b254b7566ac248fb2569c78074bccd65f70776c6ad4054ea898357f6163623"} Feb 16 17:23:59 crc kubenswrapper[4870]: I0216 17:23:59.842863 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.842826314 podStartE2EDuration="2.842826314s" podCreationTimestamp="2026-02-16 17:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:23:59.837553763 +0000 UTC m=+1444.321018197" watchObservedRunningTime="2026-02-16 17:23:59.842826314 +0000 UTC m=+1444.326290738" Feb 16 17:24:02 crc kubenswrapper[4870]: I0216 17:24:02.154396 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 17:24:03 crc kubenswrapper[4870]: I0216 17:24:03.206549 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 17:24:03 crc kubenswrapper[4870]: I0216 17:24:03.206909 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 17:24:05 crc kubenswrapper[4870]: I0216 17:24:05.366898 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:24:05 crc kubenswrapper[4870]: I0216 17:24:05.367244 4870 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:24:05 crc kubenswrapper[4870]: I0216 17:24:05.438840 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:24:05 crc kubenswrapper[4870]: I0216 17:24:05.438908 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:24:06 crc kubenswrapper[4870]: I0216 17:24:06.453154 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2dd2dad7-e696-4e73-91bf-572ee65c541a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:24:06 crc kubenswrapper[4870]: I0216 17:24:06.453154 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2dd2dad7-e696-4e73-91bf-572ee65c541a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:24:07 crc kubenswrapper[4870]: I0216 17:24:07.154782 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 17:24:07 crc kubenswrapper[4870]: I0216 17:24:07.190207 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 17:24:07 crc kubenswrapper[4870]: I0216 17:24:07.923386 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 17:24:08 crc kubenswrapper[4870]: I0216 17:24:08.207240 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-metadata-0" Feb 16 17:24:08 crc kubenswrapper[4870]: I0216 17:24:08.207631 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 17:24:08 crc kubenswrapper[4870]: E0216 17:24:08.224542 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:24:09 crc kubenswrapper[4870]: I0216 17:24:09.227152 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="328df347-b011-47c4-912c-a4eb850c9146" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.230:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 17:24:09 crc kubenswrapper[4870]: I0216 17:24:09.227247 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="328df347-b011-47c4-912c-a4eb850c9146" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.230:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:24:13 crc kubenswrapper[4870]: I0216 17:24:13.996857 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 17:24:15 crc kubenswrapper[4870]: I0216 17:24:15.446063 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 17:24:15 crc kubenswrapper[4870]: I0216 17:24:15.447138 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 17:24:15 crc kubenswrapper[4870]: I0216 17:24:15.447494 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-api-0" Feb 16 17:24:15 crc kubenswrapper[4870]: I0216 17:24:15.447522 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 17:24:15 crc kubenswrapper[4870]: I0216 17:24:15.453565 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 17:24:15 crc kubenswrapper[4870]: I0216 17:24:15.454428 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 17:24:18 crc kubenswrapper[4870]: I0216 17:24:18.212583 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 17:24:18 crc kubenswrapper[4870]: I0216 17:24:18.215045 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 17:24:18 crc kubenswrapper[4870]: I0216 17:24:18.220024 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 17:24:19 crc kubenswrapper[4870]: I0216 17:24:19.016325 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 17:24:20 crc kubenswrapper[4870]: E0216 17:24:20.226881 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:24:33 crc kubenswrapper[4870]: E0216 17:24:33.224795 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" 
Feb 16 17:24:35 crc kubenswrapper[4870]: I0216 17:24:35.367102 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:24:35 crc kubenswrapper[4870]: I0216 17:24:35.367822 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:24:48 crc kubenswrapper[4870]: E0216 17:24:48.226354 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89"
Feb 16 17:25:02 crc kubenswrapper[4870]: E0216 17:25:02.225775 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89"
Feb 16 17:25:05 crc kubenswrapper[4870]: I0216 17:25:05.366646 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:25:05 crc kubenswrapper[4870]: I0216 17:25:05.367229 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:25:05 crc kubenswrapper[4870]: I0216 17:25:05.367295 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr"
Feb 16 17:25:05 crc kubenswrapper[4870]: I0216 17:25:05.368157 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6"} pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 17:25:05 crc kubenswrapper[4870]: I0216 17:25:05.368229 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" containerID="cri-o://c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" gracePeriod=600
Feb 16 17:25:05 crc kubenswrapper[4870]: E0216 17:25:05.490302 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab"
Feb 16 17:25:05 crc kubenswrapper[4870]: I0216 17:25:05.563110 4870 generic.go:334] "Generic (PLEG): container finished" podID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" exitCode=0
Feb 16 17:25:05 crc kubenswrapper[4870]: I0216 17:25:05.563153 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerDied","Data":"c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6"}
Feb 16 17:25:05 crc kubenswrapper[4870]: I0216 17:25:05.563191 4870 scope.go:117] "RemoveContainer" containerID="a26cade4c570777b8e6874ae4e148783c7ff0c66ca799ca6a024730b89056882"
Feb 16 17:25:05 crc kubenswrapper[4870]: I0216 17:25:05.563906 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6"
Feb 16 17:25:05 crc kubenswrapper[4870]: E0216 17:25:05.564333 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab"
Feb 16 17:25:16 crc kubenswrapper[4870]: E0216 17:25:16.233460 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89"
Feb 16 17:25:19 crc kubenswrapper[4870]: I0216 17:25:19.223699 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6"
Feb 16 17:25:19 crc kubenswrapper[4870]: E0216 17:25:19.224480 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab"
Feb 16 17:25:29 crc kubenswrapper[4870]: E0216 17:25:29.225550 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89"
Feb 16 17:25:30 crc kubenswrapper[4870]: I0216 17:25:30.223436 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6"
Feb 16 17:25:30 crc kubenswrapper[4870]: E0216 17:25:30.223889 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab"
Feb 16 17:25:42 crc kubenswrapper[4870]: I0216 17:25:42.224273 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6"
Feb 16 17:25:42 crc kubenswrapper[4870]: E0216 17:25:42.225269 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab"
Feb 16 17:25:44 crc kubenswrapper[4870]: E0216 17:25:44.229406 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89"
Feb 16 17:25:57 crc kubenswrapper[4870]: I0216 17:25:57.224927 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6"
Feb 16 17:25:57 crc kubenswrapper[4870]: E0216 17:25:57.225544 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab"
Feb 16 17:25:59 crc kubenswrapper[4870]: E0216 17:25:59.225460 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89"
Feb 16 17:26:01 crc kubenswrapper[4870]: I0216 17:26:01.527101 4870 scope.go:117] "RemoveContainer" containerID="6fdf539a948da90ac3342d234a7d1aadab12d61bccaf87a1f85aa7b9d53b518a"
Feb 16 17:26:09 crc kubenswrapper[4870]: I0216 17:26:09.549133 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4nfwp"]
Feb 16 17:26:09 crc kubenswrapper[4870]: I0216 17:26:09.552275 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4nfwp"
Feb 16 17:26:09 crc kubenswrapper[4870]: I0216 17:26:09.574452 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4nfwp"]
Feb 16 17:26:09 crc kubenswrapper[4870]: I0216 17:26:09.722391 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174c13b7-90e9-4dc1-809a-bf01e29e0261-utilities\") pod \"redhat-marketplace-4nfwp\" (UID: \"174c13b7-90e9-4dc1-809a-bf01e29e0261\") " pod="openshift-marketplace/redhat-marketplace-4nfwp"
Feb 16 17:26:09 crc kubenswrapper[4870]: I0216 17:26:09.722765 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174c13b7-90e9-4dc1-809a-bf01e29e0261-catalog-content\") pod \"redhat-marketplace-4nfwp\" (UID: \"174c13b7-90e9-4dc1-809a-bf01e29e0261\") " pod="openshift-marketplace/redhat-marketplace-4nfwp"
Feb 16 17:26:09 crc kubenswrapper[4870]: I0216 17:26:09.723544 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwhnt\" (UniqueName: \"kubernetes.io/projected/174c13b7-90e9-4dc1-809a-bf01e29e0261-kube-api-access-nwhnt\") pod \"redhat-marketplace-4nfwp\" (UID: \"174c13b7-90e9-4dc1-809a-bf01e29e0261\") " pod="openshift-marketplace/redhat-marketplace-4nfwp"
Feb 16 17:26:09 crc kubenswrapper[4870]: I0216 17:26:09.826212 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwhnt\" (UniqueName: \"kubernetes.io/projected/174c13b7-90e9-4dc1-809a-bf01e29e0261-kube-api-access-nwhnt\") pod \"redhat-marketplace-4nfwp\" (UID: \"174c13b7-90e9-4dc1-809a-bf01e29e0261\") " pod="openshift-marketplace/redhat-marketplace-4nfwp"
Feb 16 17:26:09 crc kubenswrapper[4870]: I0216 17:26:09.826285 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174c13b7-90e9-4dc1-809a-bf01e29e0261-utilities\") pod \"redhat-marketplace-4nfwp\" (UID: \"174c13b7-90e9-4dc1-809a-bf01e29e0261\") " pod="openshift-marketplace/redhat-marketplace-4nfwp"
Feb 16 17:26:09 crc kubenswrapper[4870]: I0216 17:26:09.826305 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174c13b7-90e9-4dc1-809a-bf01e29e0261-catalog-content\") pod \"redhat-marketplace-4nfwp\" (UID: \"174c13b7-90e9-4dc1-809a-bf01e29e0261\") " pod="openshift-marketplace/redhat-marketplace-4nfwp"
Feb 16 17:26:09 crc kubenswrapper[4870]: I0216 17:26:09.826738 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174c13b7-90e9-4dc1-809a-bf01e29e0261-utilities\") pod \"redhat-marketplace-4nfwp\" (UID: \"174c13b7-90e9-4dc1-809a-bf01e29e0261\") " pod="openshift-marketplace/redhat-marketplace-4nfwp"
Feb 16 17:26:09 crc kubenswrapper[4870]: I0216 17:26:09.826761 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174c13b7-90e9-4dc1-809a-bf01e29e0261-catalog-content\") pod \"redhat-marketplace-4nfwp\" (UID: \"174c13b7-90e9-4dc1-809a-bf01e29e0261\") " pod="openshift-marketplace/redhat-marketplace-4nfwp"
Feb 16 17:26:09 crc kubenswrapper[4870]: I0216 17:26:09.854233 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwhnt\" (UniqueName: \"kubernetes.io/projected/174c13b7-90e9-4dc1-809a-bf01e29e0261-kube-api-access-nwhnt\") pod \"redhat-marketplace-4nfwp\" (UID: \"174c13b7-90e9-4dc1-809a-bf01e29e0261\") " pod="openshift-marketplace/redhat-marketplace-4nfwp"
Feb 16 17:26:09 crc kubenswrapper[4870]: I0216 17:26:09.878043 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4nfwp"
Feb 16 17:26:10 crc kubenswrapper[4870]: I0216 17:26:10.437576 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4nfwp"]
Feb 16 17:26:10 crc kubenswrapper[4870]: W0216 17:26:10.448128 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod174c13b7_90e9_4dc1_809a_bf01e29e0261.slice/crio-442fc746b090532ec3a5fb47b681a2a4ccfdc829357329f42083f4ff4487f6bc WatchSource:0}: Error finding container 442fc746b090532ec3a5fb47b681a2a4ccfdc829357329f42083f4ff4487f6bc: Status 404 returned error can't find the container with id 442fc746b090532ec3a5fb47b681a2a4ccfdc829357329f42083f4ff4487f6bc
Feb 16 17:26:11 crc kubenswrapper[4870]: I0216 17:26:11.305714 4870 generic.go:334] "Generic (PLEG): container finished" podID="174c13b7-90e9-4dc1-809a-bf01e29e0261" containerID="cf85d464d1cdbfb78ec579958e8c07b4915757ba739613c27662ea9f2a019720" exitCode=0
Feb 16 17:26:11 crc kubenswrapper[4870]: I0216 17:26:11.305795 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nfwp" event={"ID":"174c13b7-90e9-4dc1-809a-bf01e29e0261","Type":"ContainerDied","Data":"cf85d464d1cdbfb78ec579958e8c07b4915757ba739613c27662ea9f2a019720"}
Feb 16 17:26:11 crc kubenswrapper[4870]: I0216 17:26:11.306179 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nfwp" event={"ID":"174c13b7-90e9-4dc1-809a-bf01e29e0261","Type":"ContainerStarted","Data":"442fc746b090532ec3a5fb47b681a2a4ccfdc829357329f42083f4ff4487f6bc"}
Feb 16 17:26:12 crc kubenswrapper[4870]: I0216 17:26:12.223475 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6"
Feb 16 17:26:12 crc kubenswrapper[4870]: E0216 17:26:12.224018 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab"
Feb 16 17:26:13 crc kubenswrapper[4870]: I0216 17:26:13.343457 4870 generic.go:334] "Generic (PLEG): container finished" podID="174c13b7-90e9-4dc1-809a-bf01e29e0261" containerID="8a664cae55101124eff4d65e45f40f5e2e4af3f3f6500c8f12cc8d1a91e1e0fb" exitCode=0
Feb 16 17:26:13 crc kubenswrapper[4870]: I0216 17:26:13.343522 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nfwp" event={"ID":"174c13b7-90e9-4dc1-809a-bf01e29e0261","Type":"ContainerDied","Data":"8a664cae55101124eff4d65e45f40f5e2e4af3f3f6500c8f12cc8d1a91e1e0fb"}
Feb 16 17:26:13 crc kubenswrapper[4870]: E0216 17:26:13.360205 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 16 17:26:13 crc kubenswrapper[4870]: E0216 17:26:13.360276 4870 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 16 17:26:13 crc kubenswrapper[4870]: E0216 17:26:13.360441 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7b2p7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6hkdm_openstack(34a86750-1fff-4add-8462-7ab805ec7f89): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 17:26:13 crc kubenswrapper[4870]: E0216 17:26:13.361652 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89"
Feb 16 17:26:14 crc kubenswrapper[4870]: I0216 17:26:14.359237 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nfwp" event={"ID":"174c13b7-90e9-4dc1-809a-bf01e29e0261","Type":"ContainerStarted","Data":"3c863403133d01fdfc56727ea06784e076b2819fb76ff17d070386fd28413efc"}
Feb 16 17:26:14 crc kubenswrapper[4870]: I0216 17:26:14.393569 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4nfwp" podStartSLOduration=2.659989946 podStartE2EDuration="5.393544792s" podCreationTimestamp="2026-02-16 17:26:09 +0000 UTC" firstStartedPulling="2026-02-16 17:26:11.308169024 +0000 UTC m=+1575.791633408" lastFinishedPulling="2026-02-16 17:26:14.04172387 +0000 UTC m=+1578.525188254" observedRunningTime="2026-02-16 17:26:14.385710408 +0000 UTC m=+1578.869174792" watchObservedRunningTime="2026-02-16 17:26:14.393544792 +0000 UTC m=+1578.877009176"
Feb 16 17:26:19 crc kubenswrapper[4870]: I0216 17:26:19.878378 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4nfwp"
Feb 16 17:26:19 crc kubenswrapper[4870]: I0216 17:26:19.879010 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4nfwp"
Feb 16 17:26:19 crc kubenswrapper[4870]: I0216 17:26:19.954317 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4nfwp"
Feb 16 17:26:20 crc kubenswrapper[4870]: I0216 17:26:20.468974 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4nfwp"
Feb 16 17:26:20 crc kubenswrapper[4870]: I0216 17:26:20.529246 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4nfwp"]
Feb 16 17:26:22 crc kubenswrapper[4870]: I0216 17:26:22.444060 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4nfwp" podUID="174c13b7-90e9-4dc1-809a-bf01e29e0261" containerName="registry-server" containerID="cri-o://3c863403133d01fdfc56727ea06784e076b2819fb76ff17d070386fd28413efc" gracePeriod=2
Feb 16 17:26:22 crc kubenswrapper[4870]: I0216 17:26:22.991674 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4nfwp"
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.107130 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174c13b7-90e9-4dc1-809a-bf01e29e0261-utilities\") pod \"174c13b7-90e9-4dc1-809a-bf01e29e0261\" (UID: \"174c13b7-90e9-4dc1-809a-bf01e29e0261\") "
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.107293 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174c13b7-90e9-4dc1-809a-bf01e29e0261-catalog-content\") pod \"174c13b7-90e9-4dc1-809a-bf01e29e0261\" (UID: \"174c13b7-90e9-4dc1-809a-bf01e29e0261\") "
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.107367 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwhnt\" (UniqueName: \"kubernetes.io/projected/174c13b7-90e9-4dc1-809a-bf01e29e0261-kube-api-access-nwhnt\") pod \"174c13b7-90e9-4dc1-809a-bf01e29e0261\" (UID: \"174c13b7-90e9-4dc1-809a-bf01e29e0261\") "
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.107907 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/174c13b7-90e9-4dc1-809a-bf01e29e0261-utilities" (OuterVolumeSpecName: "utilities") pod "174c13b7-90e9-4dc1-809a-bf01e29e0261" (UID: "174c13b7-90e9-4dc1-809a-bf01e29e0261"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.114165 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/174c13b7-90e9-4dc1-809a-bf01e29e0261-kube-api-access-nwhnt" (OuterVolumeSpecName: "kube-api-access-nwhnt") pod "174c13b7-90e9-4dc1-809a-bf01e29e0261" (UID: "174c13b7-90e9-4dc1-809a-bf01e29e0261"). InnerVolumeSpecName "kube-api-access-nwhnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.130356 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/174c13b7-90e9-4dc1-809a-bf01e29e0261-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "174c13b7-90e9-4dc1-809a-bf01e29e0261" (UID: "174c13b7-90e9-4dc1-809a-bf01e29e0261"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.210426 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174c13b7-90e9-4dc1-809a-bf01e29e0261-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.210456 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174c13b7-90e9-4dc1-809a-bf01e29e0261-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.210471 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwhnt\" (UniqueName: \"kubernetes.io/projected/174c13b7-90e9-4dc1-809a-bf01e29e0261-kube-api-access-nwhnt\") on node \"crc\" DevicePath \"\""
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.455610 4870 generic.go:334] "Generic (PLEG): container finished" podID="174c13b7-90e9-4dc1-809a-bf01e29e0261" containerID="3c863403133d01fdfc56727ea06784e076b2819fb76ff17d070386fd28413efc" exitCode=0
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.455679 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nfwp" event={"ID":"174c13b7-90e9-4dc1-809a-bf01e29e0261","Type":"ContainerDied","Data":"3c863403133d01fdfc56727ea06784e076b2819fb76ff17d070386fd28413efc"}
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.456032 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nfwp" event={"ID":"174c13b7-90e9-4dc1-809a-bf01e29e0261","Type":"ContainerDied","Data":"442fc746b090532ec3a5fb47b681a2a4ccfdc829357329f42083f4ff4487f6bc"}
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.456058 4870 scope.go:117] "RemoveContainer" containerID="3c863403133d01fdfc56727ea06784e076b2819fb76ff17d070386fd28413efc"
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.455693 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4nfwp"
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.507844 4870 scope.go:117] "RemoveContainer" containerID="8a664cae55101124eff4d65e45f40f5e2e4af3f3f6500c8f12cc8d1a91e1e0fb"
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.536287 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4nfwp"]
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.571605 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4nfwp"]
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.587840 4870 scope.go:117] "RemoveContainer" containerID="cf85d464d1cdbfb78ec579958e8c07b4915757ba739613c27662ea9f2a019720"
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.656086 4870 scope.go:117] "RemoveContainer" containerID="3c863403133d01fdfc56727ea06784e076b2819fb76ff17d070386fd28413efc"
Feb 16 17:26:23 crc kubenswrapper[4870]: E0216 17:26:23.656607 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c863403133d01fdfc56727ea06784e076b2819fb76ff17d070386fd28413efc\": container with ID starting with 3c863403133d01fdfc56727ea06784e076b2819fb76ff17d070386fd28413efc not found: ID does not exist" containerID="3c863403133d01fdfc56727ea06784e076b2819fb76ff17d070386fd28413efc"
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.656663 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c863403133d01fdfc56727ea06784e076b2819fb76ff17d070386fd28413efc"} err="failed to get container status \"3c863403133d01fdfc56727ea06784e076b2819fb76ff17d070386fd28413efc\": rpc error: code = NotFound desc = could not find container \"3c863403133d01fdfc56727ea06784e076b2819fb76ff17d070386fd28413efc\": container with ID starting with 3c863403133d01fdfc56727ea06784e076b2819fb76ff17d070386fd28413efc not found: ID does not exist"
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.656699 4870 scope.go:117] "RemoveContainer" containerID="8a664cae55101124eff4d65e45f40f5e2e4af3f3f6500c8f12cc8d1a91e1e0fb"
Feb 16 17:26:23 crc kubenswrapper[4870]: E0216 17:26:23.657085 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a664cae55101124eff4d65e45f40f5e2e4af3f3f6500c8f12cc8d1a91e1e0fb\": container with ID starting with 8a664cae55101124eff4d65e45f40f5e2e4af3f3f6500c8f12cc8d1a91e1e0fb not found: ID does not exist" containerID="8a664cae55101124eff4d65e45f40f5e2e4af3f3f6500c8f12cc8d1a91e1e0fb"
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.657140 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a664cae55101124eff4d65e45f40f5e2e4af3f3f6500c8f12cc8d1a91e1e0fb"} err="failed to get container status \"8a664cae55101124eff4d65e45f40f5e2e4af3f3f6500c8f12cc8d1a91e1e0fb\": rpc error: code = NotFound desc = could not find container \"8a664cae55101124eff4d65e45f40f5e2e4af3f3f6500c8f12cc8d1a91e1e0fb\": container with ID starting with 8a664cae55101124eff4d65e45f40f5e2e4af3f3f6500c8f12cc8d1a91e1e0fb not found: ID does not exist"
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.657175 4870 scope.go:117] "RemoveContainer" containerID="cf85d464d1cdbfb78ec579958e8c07b4915757ba739613c27662ea9f2a019720"
Feb 16 17:26:23 crc kubenswrapper[4870]: E0216 17:26:23.659729 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf85d464d1cdbfb78ec579958e8c07b4915757ba739613c27662ea9f2a019720\": container with ID starting with cf85d464d1cdbfb78ec579958e8c07b4915757ba739613c27662ea9f2a019720 not found: ID does not exist" containerID="cf85d464d1cdbfb78ec579958e8c07b4915757ba739613c27662ea9f2a019720"
Feb 16 17:26:23 crc kubenswrapper[4870]: I0216 17:26:23.659770 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf85d464d1cdbfb78ec579958e8c07b4915757ba739613c27662ea9f2a019720"} err="failed to get container status \"cf85d464d1cdbfb78ec579958e8c07b4915757ba739613c27662ea9f2a019720\": rpc error: code = NotFound desc = could not find container \"cf85d464d1cdbfb78ec579958e8c07b4915757ba739613c27662ea9f2a019720\": container with ID starting with cf85d464d1cdbfb78ec579958e8c07b4915757ba739613c27662ea9f2a019720 not found: ID does not exist"
Feb 16 17:26:24 crc kubenswrapper[4870]: I0216 17:26:24.235016 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="174c13b7-90e9-4dc1-809a-bf01e29e0261" path="/var/lib/kubelet/pods/174c13b7-90e9-4dc1-809a-bf01e29e0261/volumes"
Feb 16 17:26:26 crc kubenswrapper[4870]: E0216 17:26:26.233940 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89"
Feb 16 17:26:27 crc kubenswrapper[4870]: I0216 17:26:27.222834 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6"
Feb 16 17:26:27 crc kubenswrapper[4870]: E0216 17:26:27.223136 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab"
Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.092997 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qjdwl"]
Feb 16 17:26:29 crc kubenswrapper[4870]: E0216 17:26:29.095187 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174c13b7-90e9-4dc1-809a-bf01e29e0261" containerName="extract-utilities"
Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.095375 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="174c13b7-90e9-4dc1-809a-bf01e29e0261" containerName="extract-utilities"
Feb 16 17:26:29 crc kubenswrapper[4870]: E0216 17:26:29.095537 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174c13b7-90e9-4dc1-809a-bf01e29e0261" containerName="registry-server"
Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.095670 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="174c13b7-90e9-4dc1-809a-bf01e29e0261" containerName="registry-server"
Feb 16 17:26:29 crc kubenswrapper[4870]: E0216 17:26:29.095816 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174c13b7-90e9-4dc1-809a-bf01e29e0261" containerName="extract-content"
Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.095939 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="174c13b7-90e9-4dc1-809a-bf01e29e0261" containerName="extract-content"
Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.096441 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="174c13b7-90e9-4dc1-809a-bf01e29e0261" containerName="registry-server"
Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.099131 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qjdwl"
Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.104421 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qjdwl"]
Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.233180 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c22562b-6b5d-40df-a657-22b4a29b6cc0-catalog-content\") pod \"certified-operators-qjdwl\" (UID: \"7c22562b-6b5d-40df-a657-22b4a29b6cc0\") " pod="openshift-marketplace/certified-operators-qjdwl"
Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.233235 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgswg\" (UniqueName: \"kubernetes.io/projected/7c22562b-6b5d-40df-a657-22b4a29b6cc0-kube-api-access-mgswg\") pod \"certified-operators-qjdwl\" (UID: \"7c22562b-6b5d-40df-a657-22b4a29b6cc0\") " pod="openshift-marketplace/certified-operators-qjdwl"
Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.233285 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c22562b-6b5d-40df-a657-22b4a29b6cc0-utilities\") pod \"certified-operators-qjdwl\" (UID: \"7c22562b-6b5d-40df-a657-22b4a29b6cc0\") " pod="openshift-marketplace/certified-operators-qjdwl"
Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.335181 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c22562b-6b5d-40df-a657-22b4a29b6cc0-catalog-content\") pod \"certified-operators-qjdwl\" (UID: \"7c22562b-6b5d-40df-a657-22b4a29b6cc0\") " pod="openshift-marketplace/certified-operators-qjdwl"
Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.335248 4870 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"kube-api-access-mgswg\" (UniqueName: \"kubernetes.io/projected/7c22562b-6b5d-40df-a657-22b4a29b6cc0-kube-api-access-mgswg\") pod \"certified-operators-qjdwl\" (UID: \"7c22562b-6b5d-40df-a657-22b4a29b6cc0\") " pod="openshift-marketplace/certified-operators-qjdwl" Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.335300 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c22562b-6b5d-40df-a657-22b4a29b6cc0-utilities\") pod \"certified-operators-qjdwl\" (UID: \"7c22562b-6b5d-40df-a657-22b4a29b6cc0\") " pod="openshift-marketplace/certified-operators-qjdwl" Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.335864 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c22562b-6b5d-40df-a657-22b4a29b6cc0-catalog-content\") pod \"certified-operators-qjdwl\" (UID: \"7c22562b-6b5d-40df-a657-22b4a29b6cc0\") " pod="openshift-marketplace/certified-operators-qjdwl" Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.335876 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c22562b-6b5d-40df-a657-22b4a29b6cc0-utilities\") pod \"certified-operators-qjdwl\" (UID: \"7c22562b-6b5d-40df-a657-22b4a29b6cc0\") " pod="openshift-marketplace/certified-operators-qjdwl" Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.354711 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgswg\" (UniqueName: \"kubernetes.io/projected/7c22562b-6b5d-40df-a657-22b4a29b6cc0-kube-api-access-mgswg\") pod \"certified-operators-qjdwl\" (UID: \"7c22562b-6b5d-40df-a657-22b4a29b6cc0\") " pod="openshift-marketplace/certified-operators-qjdwl" Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.427659 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qjdwl" Feb 16 17:26:29 crc kubenswrapper[4870]: I0216 17:26:29.936978 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qjdwl"] Feb 16 17:26:30 crc kubenswrapper[4870]: I0216 17:26:30.588878 4870 generic.go:334] "Generic (PLEG): container finished" podID="7c22562b-6b5d-40df-a657-22b4a29b6cc0" containerID="8b15c28db911d8d231b589e82283bf13a1e6e7dc6cfb931cd2fd952650f167a0" exitCode=0 Feb 16 17:26:30 crc kubenswrapper[4870]: I0216 17:26:30.588911 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjdwl" event={"ID":"7c22562b-6b5d-40df-a657-22b4a29b6cc0","Type":"ContainerDied","Data":"8b15c28db911d8d231b589e82283bf13a1e6e7dc6cfb931cd2fd952650f167a0"} Feb 16 17:26:30 crc kubenswrapper[4870]: I0216 17:26:30.588968 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjdwl" event={"ID":"7c22562b-6b5d-40df-a657-22b4a29b6cc0","Type":"ContainerStarted","Data":"89319bc4796a3fdb4d756c7c1b6f97b93c6e4b6057642c2f001a7b9102d7f35c"} Feb 16 17:26:31 crc kubenswrapper[4870]: I0216 17:26:31.602812 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjdwl" event={"ID":"7c22562b-6b5d-40df-a657-22b4a29b6cc0","Type":"ContainerStarted","Data":"f75ef27d1fe943c3eed1349f0e20212abd3715173cbf13bc16a6217891e17e57"} Feb 16 17:26:32 crc kubenswrapper[4870]: I0216 17:26:32.617840 4870 generic.go:334] "Generic (PLEG): container finished" podID="7c22562b-6b5d-40df-a657-22b4a29b6cc0" containerID="f75ef27d1fe943c3eed1349f0e20212abd3715173cbf13bc16a6217891e17e57" exitCode=0 Feb 16 17:26:32 crc kubenswrapper[4870]: I0216 17:26:32.618087 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjdwl" 
event={"ID":"7c22562b-6b5d-40df-a657-22b4a29b6cc0","Type":"ContainerDied","Data":"f75ef27d1fe943c3eed1349f0e20212abd3715173cbf13bc16a6217891e17e57"} Feb 16 17:26:33 crc kubenswrapper[4870]: I0216 17:26:33.632129 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjdwl" event={"ID":"7c22562b-6b5d-40df-a657-22b4a29b6cc0","Type":"ContainerStarted","Data":"e97e6bbc4acf1a2435fa2cda5229561a04aad3f0e9d270b6649a5f88cea37214"} Feb 16 17:26:33 crc kubenswrapper[4870]: I0216 17:26:33.650034 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qjdwl" podStartSLOduration=2.25403674 podStartE2EDuration="4.650007821s" podCreationTimestamp="2026-02-16 17:26:29 +0000 UTC" firstStartedPulling="2026-02-16 17:26:30.591715418 +0000 UTC m=+1595.075179812" lastFinishedPulling="2026-02-16 17:26:32.987686519 +0000 UTC m=+1597.471150893" observedRunningTime="2026-02-16 17:26:33.647469479 +0000 UTC m=+1598.130933863" watchObservedRunningTime="2026-02-16 17:26:33.650007821 +0000 UTC m=+1598.133472225" Feb 16 17:26:39 crc kubenswrapper[4870]: I0216 17:26:39.428356 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qjdwl" Feb 16 17:26:39 crc kubenswrapper[4870]: I0216 17:26:39.428962 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qjdwl" Feb 16 17:26:39 crc kubenswrapper[4870]: I0216 17:26:39.489826 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qjdwl" Feb 16 17:26:39 crc kubenswrapper[4870]: I0216 17:26:39.757162 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qjdwl" Feb 16 17:26:40 crc kubenswrapper[4870]: I0216 17:26:40.739151 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-qjdwl"] Feb 16 17:26:41 crc kubenswrapper[4870]: I0216 17:26:41.223586 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:26:41 crc kubenswrapper[4870]: E0216 17:26:41.224144 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:26:41 crc kubenswrapper[4870]: E0216 17:26:41.225389 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:26:41 crc kubenswrapper[4870]: I0216 17:26:41.721758 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qjdwl" podUID="7c22562b-6b5d-40df-a657-22b4a29b6cc0" containerName="registry-server" containerID="cri-o://e97e6bbc4acf1a2435fa2cda5229561a04aad3f0e9d270b6649a5f88cea37214" gracePeriod=2 Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.725823 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qjdwl" Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.732643 4870 generic.go:334] "Generic (PLEG): container finished" podID="7c22562b-6b5d-40df-a657-22b4a29b6cc0" containerID="e97e6bbc4acf1a2435fa2cda5229561a04aad3f0e9d270b6649a5f88cea37214" exitCode=0 Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.732681 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjdwl" event={"ID":"7c22562b-6b5d-40df-a657-22b4a29b6cc0","Type":"ContainerDied","Data":"e97e6bbc4acf1a2435fa2cda5229561a04aad3f0e9d270b6649a5f88cea37214"} Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.732705 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjdwl" event={"ID":"7c22562b-6b5d-40df-a657-22b4a29b6cc0","Type":"ContainerDied","Data":"89319bc4796a3fdb4d756c7c1b6f97b93c6e4b6057642c2f001a7b9102d7f35c"} Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.732723 4870 scope.go:117] "RemoveContainer" containerID="e97e6bbc4acf1a2435fa2cda5229561a04aad3f0e9d270b6649a5f88cea37214" Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.732724 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qjdwl" Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.754740 4870 scope.go:117] "RemoveContainer" containerID="f75ef27d1fe943c3eed1349f0e20212abd3715173cbf13bc16a6217891e17e57" Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.778836 4870 scope.go:117] "RemoveContainer" containerID="8b15c28db911d8d231b589e82283bf13a1e6e7dc6cfb931cd2fd952650f167a0" Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.827236 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c22562b-6b5d-40df-a657-22b4a29b6cc0-catalog-content\") pod \"7c22562b-6b5d-40df-a657-22b4a29b6cc0\" (UID: \"7c22562b-6b5d-40df-a657-22b4a29b6cc0\") " Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.827328 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c22562b-6b5d-40df-a657-22b4a29b6cc0-utilities\") pod \"7c22562b-6b5d-40df-a657-22b4a29b6cc0\" (UID: \"7c22562b-6b5d-40df-a657-22b4a29b6cc0\") " Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.827434 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgswg\" (UniqueName: \"kubernetes.io/projected/7c22562b-6b5d-40df-a657-22b4a29b6cc0-kube-api-access-mgswg\") pod \"7c22562b-6b5d-40df-a657-22b4a29b6cc0\" (UID: \"7c22562b-6b5d-40df-a657-22b4a29b6cc0\") " Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.828259 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c22562b-6b5d-40df-a657-22b4a29b6cc0-utilities" (OuterVolumeSpecName: "utilities") pod "7c22562b-6b5d-40df-a657-22b4a29b6cc0" (UID: "7c22562b-6b5d-40df-a657-22b4a29b6cc0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.833477 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c22562b-6b5d-40df-a657-22b4a29b6cc0-kube-api-access-mgswg" (OuterVolumeSpecName: "kube-api-access-mgswg") pod "7c22562b-6b5d-40df-a657-22b4a29b6cc0" (UID: "7c22562b-6b5d-40df-a657-22b4a29b6cc0"). InnerVolumeSpecName "kube-api-access-mgswg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.844376 4870 scope.go:117] "RemoveContainer" containerID="e97e6bbc4acf1a2435fa2cda5229561a04aad3f0e9d270b6649a5f88cea37214" Feb 16 17:26:42 crc kubenswrapper[4870]: E0216 17:26:42.845006 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e97e6bbc4acf1a2435fa2cda5229561a04aad3f0e9d270b6649a5f88cea37214\": container with ID starting with e97e6bbc4acf1a2435fa2cda5229561a04aad3f0e9d270b6649a5f88cea37214 not found: ID does not exist" containerID="e97e6bbc4acf1a2435fa2cda5229561a04aad3f0e9d270b6649a5f88cea37214" Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.845058 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e97e6bbc4acf1a2435fa2cda5229561a04aad3f0e9d270b6649a5f88cea37214"} err="failed to get container status \"e97e6bbc4acf1a2435fa2cda5229561a04aad3f0e9d270b6649a5f88cea37214\": rpc error: code = NotFound desc = could not find container \"e97e6bbc4acf1a2435fa2cda5229561a04aad3f0e9d270b6649a5f88cea37214\": container with ID starting with e97e6bbc4acf1a2435fa2cda5229561a04aad3f0e9d270b6649a5f88cea37214 not found: ID does not exist" Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.845088 4870 scope.go:117] "RemoveContainer" containerID="f75ef27d1fe943c3eed1349f0e20212abd3715173cbf13bc16a6217891e17e57" Feb 16 17:26:42 crc kubenswrapper[4870]: E0216 17:26:42.845398 
4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f75ef27d1fe943c3eed1349f0e20212abd3715173cbf13bc16a6217891e17e57\": container with ID starting with f75ef27d1fe943c3eed1349f0e20212abd3715173cbf13bc16a6217891e17e57 not found: ID does not exist" containerID="f75ef27d1fe943c3eed1349f0e20212abd3715173cbf13bc16a6217891e17e57" Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.845440 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f75ef27d1fe943c3eed1349f0e20212abd3715173cbf13bc16a6217891e17e57"} err="failed to get container status \"f75ef27d1fe943c3eed1349f0e20212abd3715173cbf13bc16a6217891e17e57\": rpc error: code = NotFound desc = could not find container \"f75ef27d1fe943c3eed1349f0e20212abd3715173cbf13bc16a6217891e17e57\": container with ID starting with f75ef27d1fe943c3eed1349f0e20212abd3715173cbf13bc16a6217891e17e57 not found: ID does not exist" Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.845467 4870 scope.go:117] "RemoveContainer" containerID="8b15c28db911d8d231b589e82283bf13a1e6e7dc6cfb931cd2fd952650f167a0" Feb 16 17:26:42 crc kubenswrapper[4870]: E0216 17:26:42.845684 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b15c28db911d8d231b589e82283bf13a1e6e7dc6cfb931cd2fd952650f167a0\": container with ID starting with 8b15c28db911d8d231b589e82283bf13a1e6e7dc6cfb931cd2fd952650f167a0 not found: ID does not exist" containerID="8b15c28db911d8d231b589e82283bf13a1e6e7dc6cfb931cd2fd952650f167a0" Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.845719 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b15c28db911d8d231b589e82283bf13a1e6e7dc6cfb931cd2fd952650f167a0"} err="failed to get container status \"8b15c28db911d8d231b589e82283bf13a1e6e7dc6cfb931cd2fd952650f167a0\": rpc error: code = 
NotFound desc = could not find container \"8b15c28db911d8d231b589e82283bf13a1e6e7dc6cfb931cd2fd952650f167a0\": container with ID starting with 8b15c28db911d8d231b589e82283bf13a1e6e7dc6cfb931cd2fd952650f167a0 not found: ID does not exist" Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.881428 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c22562b-6b5d-40df-a657-22b4a29b6cc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c22562b-6b5d-40df-a657-22b4a29b6cc0" (UID: "7c22562b-6b5d-40df-a657-22b4a29b6cc0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.929859 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c22562b-6b5d-40df-a657-22b4a29b6cc0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.929900 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c22562b-6b5d-40df-a657-22b4a29b6cc0-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:42 crc kubenswrapper[4870]: I0216 17:26:42.929910 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgswg\" (UniqueName: \"kubernetes.io/projected/7c22562b-6b5d-40df-a657-22b4a29b6cc0-kube-api-access-mgswg\") on node \"crc\" DevicePath \"\"" Feb 16 17:26:43 crc kubenswrapper[4870]: I0216 17:26:43.072652 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qjdwl"] Feb 16 17:26:43 crc kubenswrapper[4870]: I0216 17:26:43.081213 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qjdwl"] Feb 16 17:26:44 crc kubenswrapper[4870]: I0216 17:26:44.255761 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7c22562b-6b5d-40df-a657-22b4a29b6cc0" path="/var/lib/kubelet/pods/7c22562b-6b5d-40df-a657-22b4a29b6cc0/volumes" Feb 16 17:26:54 crc kubenswrapper[4870]: E0216 17:26:54.225757 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:26:56 crc kubenswrapper[4870]: I0216 17:26:56.233432 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:26:56 crc kubenswrapper[4870]: E0216 17:26:56.234582 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:27:01 crc kubenswrapper[4870]: I0216 17:27:01.605562 4870 scope.go:117] "RemoveContainer" containerID="3f8171683589800873f30d0092fa287a67395ef35b531b5c45043adbdd173150" Feb 16 17:27:01 crc kubenswrapper[4870]: I0216 17:27:01.671276 4870 scope.go:117] "RemoveContainer" containerID="17b0f62ad0b46c0568574da2a40168c409f3f6bfae5dfbb09cdd75a76e196661" Feb 16 17:27:01 crc kubenswrapper[4870]: I0216 17:27:01.726184 4870 scope.go:117] "RemoveContainer" containerID="70f68f4e42fe70ef1a7b5803bbda57bae816fb675efd42da2e3b194020e7318a" Feb 16 17:27:06 crc kubenswrapper[4870]: E0216 17:27:06.240590 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:27:10 crc kubenswrapper[4870]: I0216 17:27:10.223413 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:27:10 crc kubenswrapper[4870]: E0216 17:27:10.224228 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:27:18 crc kubenswrapper[4870]: E0216 17:27:18.226962 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:27:23 crc kubenswrapper[4870]: I0216 17:27:23.223285 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:27:23 crc kubenswrapper[4870]: E0216 17:27:23.224304 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:27:29 crc kubenswrapper[4870]: E0216 17:27:29.226047 4870 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:27:37 crc kubenswrapper[4870]: I0216 17:27:37.223122 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:27:37 crc kubenswrapper[4870]: E0216 17:27:37.224058 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:27:42 crc kubenswrapper[4870]: E0216 17:27:42.226107 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:27:48 crc kubenswrapper[4870]: I0216 17:27:48.223413 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:27:48 crc kubenswrapper[4870]: E0216 17:27:48.224256 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:27:54 crc kubenswrapper[4870]: E0216 17:27:54.225167 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:28:01 crc kubenswrapper[4870]: I0216 17:28:01.888648 4870 scope.go:117] "RemoveContainer" containerID="174b76b80058494b81583b695257ebdd50654383978093ce2c861c57bc68f5cb" Feb 16 17:28:01 crc kubenswrapper[4870]: I0216 17:28:01.917412 4870 scope.go:117] "RemoveContainer" containerID="d4a81ce2df993166d092df61d9ef89ce07467e6e0f69904c8303f8d50d8733c2" Feb 16 17:28:02 crc kubenswrapper[4870]: I0216 17:28:02.223499 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:28:02 crc kubenswrapper[4870]: E0216 17:28:02.225046 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:28:06 crc kubenswrapper[4870]: E0216 17:28:06.232918 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:28:14 crc kubenswrapper[4870]: I0216 
17:28:14.223736 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:28:14 crc kubenswrapper[4870]: E0216 17:28:14.225042 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:28:18 crc kubenswrapper[4870]: E0216 17:28:18.226683 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:28:25 crc kubenswrapper[4870]: I0216 17:28:25.230877 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:28:25 crc kubenswrapper[4870]: E0216 17:28:25.231569 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:28:31 crc kubenswrapper[4870]: E0216 17:28:31.785666 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:28:38 crc kubenswrapper[4870]: I0216 17:28:38.222903 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:28:38 crc kubenswrapper[4870]: E0216 17:28:38.223657 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:28:44 crc kubenswrapper[4870]: E0216 17:28:44.226690 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:28:50 crc kubenswrapper[4870]: I0216 17:28:50.222885 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:28:50 crc kubenswrapper[4870]: E0216 17:28:50.223608 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:28:55 crc kubenswrapper[4870]: E0216 17:28:55.224534 4870 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:29:01 crc kubenswrapper[4870]: I0216 17:29:01.982884 4870 scope.go:117] "RemoveContainer" containerID="b6572a5e7419a2d305999a2b9f99298f5534e2dfc3b992dd8eafd1ecab6efcff" Feb 16 17:29:02 crc kubenswrapper[4870]: I0216 17:29:02.016595 4870 scope.go:117] "RemoveContainer" containerID="0eb45ad9b7c918f116381e1fab55847b2281cc65a4275e742ad8df0312080529" Feb 16 17:29:02 crc kubenswrapper[4870]: I0216 17:29:02.222852 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:29:02 crc kubenswrapper[4870]: E0216 17:29:02.223366 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:29:09 crc kubenswrapper[4870]: E0216 17:29:09.224770 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:29:17 crc kubenswrapper[4870]: I0216 17:29:17.223642 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:29:17 crc kubenswrapper[4870]: E0216 
17:29:17.226840 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:29:22 crc kubenswrapper[4870]: E0216 17:29:22.225220 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:29:28 crc kubenswrapper[4870]: I0216 17:29:28.223291 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:29:28 crc kubenswrapper[4870]: E0216 17:29:28.224208 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:29:36 crc kubenswrapper[4870]: E0216 17:29:36.232912 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:29:40 crc kubenswrapper[4870]: I0216 17:29:40.223840 4870 
scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:29:40 crc kubenswrapper[4870]: E0216 17:29:40.225137 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 17:29:50.073289 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-x7s6z"] Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 17:29:50.091398 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-8cad-account-create-update-7qqnt"] Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 17:29:50.103439 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-wp94p"] Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 17:29:50.111886 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7b31-account-create-update-ktdzc"] Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 17:29:50.120751 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-x7s6z"] Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 17:29:50.129854 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-cd7qj"] Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 17:29:50.139099 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2f89-account-create-update-hx5vd"] Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 17:29:50.147710 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-8cad-account-create-update-7qqnt"] Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 
17:29:50.158032 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-wp94p"] Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 17:29:50.166768 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-cd7qj"] Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 17:29:50.176200 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7b31-account-create-update-ktdzc"] Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 17:29:50.184919 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-2f89-account-create-update-hx5vd"] Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 17:29:50.234756 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="359ff9c1-712f-4a98-b617-c94a4f7a1843" path="/var/lib/kubelet/pods/359ff9c1-712f-4a98-b617-c94a4f7a1843/volumes" Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 17:29:50.235359 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e18a14e-1e9e-44d9-8ed9-a93214973da3" path="/var/lib/kubelet/pods/3e18a14e-1e9e-44d9-8ed9-a93214973da3/volumes" Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 17:29:50.235916 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c2c95a8-3204-4882-b77c-4f09f82f9b14" path="/var/lib/kubelet/pods/5c2c95a8-3204-4882-b77c-4f09f82f9b14/volumes" Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 17:29:50.236508 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="661affca-3ccf-42b7-9095-eb1dbd2e38fb" path="/var/lib/kubelet/pods/661affca-3ccf-42b7-9095-eb1dbd2e38fb/volumes" Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 17:29:50.237582 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcfc139e-ad4c-4214-9403-73634951cd57" path="/var/lib/kubelet/pods/bcfc139e-ad4c-4214-9403-73634951cd57/volumes" Feb 16 17:29:50 crc kubenswrapper[4870]: I0216 17:29:50.238105 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="e6500ae3-d348-4473-81db-c795446ba15d" path="/var/lib/kubelet/pods/e6500ae3-d348-4473-81db-c795446ba15d/volumes" Feb 16 17:29:51 crc kubenswrapper[4870]: I0216 17:29:51.224322 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:29:51 crc kubenswrapper[4870]: E0216 17:29:51.224787 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:29:51 crc kubenswrapper[4870]: E0216 17:29:51.225839 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.144055 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt"] Feb 16 17:30:00 crc kubenswrapper[4870]: E0216 17:30:00.145084 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c22562b-6b5d-40df-a657-22b4a29b6cc0" containerName="registry-server" Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.145103 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c22562b-6b5d-40df-a657-22b4a29b6cc0" containerName="registry-server" Feb 16 17:30:00 crc kubenswrapper[4870]: E0216 17:30:00.145119 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c22562b-6b5d-40df-a657-22b4a29b6cc0" containerName="extract-utilities" Feb 
16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.145127 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c22562b-6b5d-40df-a657-22b4a29b6cc0" containerName="extract-utilities" Feb 16 17:30:00 crc kubenswrapper[4870]: E0216 17:30:00.145139 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c22562b-6b5d-40df-a657-22b4a29b6cc0" containerName="extract-content" Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.145147 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c22562b-6b5d-40df-a657-22b4a29b6cc0" containerName="extract-content" Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.145389 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c22562b-6b5d-40df-a657-22b4a29b6cc0" containerName="registry-server" Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.146187 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt" Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.148290 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.148963 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.153552 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt"] Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.223859 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4750d54-1e62-4929-bb98-102c0149deb7-secret-volume\") pod \"collect-profiles-29521050-qckqt\" (UID: \"d4750d54-1e62-4929-bb98-102c0149deb7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt" Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.224025 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spvgs\" (UniqueName: \"kubernetes.io/projected/d4750d54-1e62-4929-bb98-102c0149deb7-kube-api-access-spvgs\") pod \"collect-profiles-29521050-qckqt\" (UID: \"d4750d54-1e62-4929-bb98-102c0149deb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt" Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.224080 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4750d54-1e62-4929-bb98-102c0149deb7-config-volume\") pod \"collect-profiles-29521050-qckqt\" (UID: \"d4750d54-1e62-4929-bb98-102c0149deb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt" Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.326407 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spvgs\" (UniqueName: \"kubernetes.io/projected/d4750d54-1e62-4929-bb98-102c0149deb7-kube-api-access-spvgs\") pod \"collect-profiles-29521050-qckqt\" (UID: \"d4750d54-1e62-4929-bb98-102c0149deb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt" Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.326793 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4750d54-1e62-4929-bb98-102c0149deb7-config-volume\") pod \"collect-profiles-29521050-qckqt\" (UID: \"d4750d54-1e62-4929-bb98-102c0149deb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt" Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.327012 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d4750d54-1e62-4929-bb98-102c0149deb7-secret-volume\") pod \"collect-profiles-29521050-qckqt\" (UID: \"d4750d54-1e62-4929-bb98-102c0149deb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt" Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.328039 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4750d54-1e62-4929-bb98-102c0149deb7-config-volume\") pod \"collect-profiles-29521050-qckqt\" (UID: \"d4750d54-1e62-4929-bb98-102c0149deb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt" Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.340029 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4750d54-1e62-4929-bb98-102c0149deb7-secret-volume\") pod \"collect-profiles-29521050-qckqt\" (UID: \"d4750d54-1e62-4929-bb98-102c0149deb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt" Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.342652 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spvgs\" (UniqueName: \"kubernetes.io/projected/d4750d54-1e62-4929-bb98-102c0149deb7-kube-api-access-spvgs\") pod \"collect-profiles-29521050-qckqt\" (UID: \"d4750d54-1e62-4929-bb98-102c0149deb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt" Feb 16 17:30:00 crc kubenswrapper[4870]: I0216 17:30:00.468884 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt" Feb 16 17:30:01 crc kubenswrapper[4870]: I0216 17:30:01.023918 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt"] Feb 16 17:30:01 crc kubenswrapper[4870]: I0216 17:30:01.819534 4870 generic.go:334] "Generic (PLEG): container finished" podID="d4750d54-1e62-4929-bb98-102c0149deb7" containerID="644f35854873ce4b79c95b656dc4ef9f092d530561edee884f5ffd0e26e18efb" exitCode=0 Feb 16 17:30:01 crc kubenswrapper[4870]: I0216 17:30:01.819725 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt" event={"ID":"d4750d54-1e62-4929-bb98-102c0149deb7","Type":"ContainerDied","Data":"644f35854873ce4b79c95b656dc4ef9f092d530561edee884f5ffd0e26e18efb"} Feb 16 17:30:01 crc kubenswrapper[4870]: I0216 17:30:01.819822 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt" event={"ID":"d4750d54-1e62-4929-bb98-102c0149deb7","Type":"ContainerStarted","Data":"650e162d3e6760e47f221b8ce306ea0225fc364361b8d3fe0bef0a5c61f6c840"} Feb 16 17:30:02 crc kubenswrapper[4870]: I0216 17:30:02.099926 4870 scope.go:117] "RemoveContainer" containerID="d8c9beafa329b75546850554f4becaafe7906de071b002443984c51563e267ca" Feb 16 17:30:02 crc kubenswrapper[4870]: I0216 17:30:02.123092 4870 scope.go:117] "RemoveContainer" containerID="175b2beea5cbfacea0fc5a1b61a865b288aac569650240f3654fa21897fcd65f" Feb 16 17:30:02 crc kubenswrapper[4870]: I0216 17:30:02.171934 4870 scope.go:117] "RemoveContainer" containerID="a5a3201f0149777fdd0d4553d617da8b73a47f363635814d205deb8705048357" Feb 16 17:30:02 crc kubenswrapper[4870]: I0216 17:30:02.221357 4870 scope.go:117] "RemoveContainer" containerID="386c849778828c9a377e50930556df23b22b8b4accb2587804944c02f91083a0" Feb 16 17:30:02 crc kubenswrapper[4870]: I0216 
17:30:02.263034 4870 scope.go:117] "RemoveContainer" containerID="d57972e0f4f6d967fbbc0cf9eb82f7bb17be32dd07c9d14b89dce22d7b89b024" Feb 16 17:30:02 crc kubenswrapper[4870]: I0216 17:30:02.288011 4870 scope.go:117] "RemoveContainer" containerID="ce534793383eb4cbbb9c777a22406bfc631b2ab811d5abfaefb68c6ca5d94d6d" Feb 16 17:30:02 crc kubenswrapper[4870]: I0216 17:30:02.356155 4870 scope.go:117] "RemoveContainer" containerID="04b9bcc99d8c879b4193a41375e6dbe7fdd36833f0b0ddc4e5affd10ffcbab60" Feb 16 17:30:02 crc kubenswrapper[4870]: I0216 17:30:02.411565 4870 scope.go:117] "RemoveContainer" containerID="9d72b11bf11674ab419b9ccad391247f9e47875a3bf9a84ec304b7cde8fe8e03" Feb 16 17:30:02 crc kubenswrapper[4870]: I0216 17:30:02.451234 4870 scope.go:117] "RemoveContainer" containerID="ced4e42140e8357e9b75b81a09e3e73296eac74c47e2086add46bbf17263ca0e" Feb 16 17:30:02 crc kubenswrapper[4870]: I0216 17:30:02.486674 4870 scope.go:117] "RemoveContainer" containerID="c6db80950133e13de04db72ea23b86ab30292ddcc07fc9bb7d88f06fe360a7f0" Feb 16 17:30:03 crc kubenswrapper[4870]: I0216 17:30:03.039051 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wgh2j"] Feb 16 17:30:03 crc kubenswrapper[4870]: I0216 17:30:03.051148 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wgh2j"] Feb 16 17:30:03 crc kubenswrapper[4870]: I0216 17:30:03.174596 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt" Feb 16 17:30:03 crc kubenswrapper[4870]: E0216 17:30:03.224741 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:30:03 crc kubenswrapper[4870]: I0216 17:30:03.297428 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4750d54-1e62-4929-bb98-102c0149deb7-config-volume\") pod \"d4750d54-1e62-4929-bb98-102c0149deb7\" (UID: \"d4750d54-1e62-4929-bb98-102c0149deb7\") " Feb 16 17:30:03 crc kubenswrapper[4870]: I0216 17:30:03.297502 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spvgs\" (UniqueName: \"kubernetes.io/projected/d4750d54-1e62-4929-bb98-102c0149deb7-kube-api-access-spvgs\") pod \"d4750d54-1e62-4929-bb98-102c0149deb7\" (UID: \"d4750d54-1e62-4929-bb98-102c0149deb7\") " Feb 16 17:30:03 crc kubenswrapper[4870]: I0216 17:30:03.297652 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4750d54-1e62-4929-bb98-102c0149deb7-secret-volume\") pod \"d4750d54-1e62-4929-bb98-102c0149deb7\" (UID: \"d4750d54-1e62-4929-bb98-102c0149deb7\") " Feb 16 17:30:03 crc kubenswrapper[4870]: I0216 17:30:03.298251 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4750d54-1e62-4929-bb98-102c0149deb7-config-volume" (OuterVolumeSpecName: "config-volume") pod "d4750d54-1e62-4929-bb98-102c0149deb7" (UID: "d4750d54-1e62-4929-bb98-102c0149deb7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:30:03 crc kubenswrapper[4870]: I0216 17:30:03.300219 4870 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4750d54-1e62-4929-bb98-102c0149deb7-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 17:30:03 crc kubenswrapper[4870]: I0216 17:30:03.308212 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4750d54-1e62-4929-bb98-102c0149deb7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d4750d54-1e62-4929-bb98-102c0149deb7" (UID: "d4750d54-1e62-4929-bb98-102c0149deb7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:30:03 crc kubenswrapper[4870]: I0216 17:30:03.309293 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4750d54-1e62-4929-bb98-102c0149deb7-kube-api-access-spvgs" (OuterVolumeSpecName: "kube-api-access-spvgs") pod "d4750d54-1e62-4929-bb98-102c0149deb7" (UID: "d4750d54-1e62-4929-bb98-102c0149deb7"). InnerVolumeSpecName "kube-api-access-spvgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:30:03 crc kubenswrapper[4870]: I0216 17:30:03.403734 4870 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4750d54-1e62-4929-bb98-102c0149deb7-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 17:30:03 crc kubenswrapper[4870]: I0216 17:30:03.403783 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spvgs\" (UniqueName: \"kubernetes.io/projected/d4750d54-1e62-4929-bb98-102c0149deb7-kube-api-access-spvgs\") on node \"crc\" DevicePath \"\"" Feb 16 17:30:03 crc kubenswrapper[4870]: I0216 17:30:03.839681 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt" event={"ID":"d4750d54-1e62-4929-bb98-102c0149deb7","Type":"ContainerDied","Data":"650e162d3e6760e47f221b8ce306ea0225fc364361b8d3fe0bef0a5c61f6c840"} Feb 16 17:30:03 crc kubenswrapper[4870]: I0216 17:30:03.839725 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="650e162d3e6760e47f221b8ce306ea0225fc364361b8d3fe0bef0a5c61f6c840" Feb 16 17:30:03 crc kubenswrapper[4870]: I0216 17:30:03.839750 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-qckqt" Feb 16 17:30:04 crc kubenswrapper[4870]: I0216 17:30:04.239555 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9e28213-754a-4478-8b49-310d4cb4e8bc" path="/var/lib/kubelet/pods/e9e28213-754a-4478-8b49-310d4cb4e8bc/volumes" Feb 16 17:30:06 crc kubenswrapper[4870]: I0216 17:30:06.229833 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:30:06 crc kubenswrapper[4870]: I0216 17:30:06.868794 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerStarted","Data":"5480579f5b413919e2674eb7481044b2c4a41b4dfb3d9d951686496ecef4edf6"} Feb 16 17:30:17 crc kubenswrapper[4870]: I0216 17:30:17.037777 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-kq5cz"] Feb 16 17:30:17 crc kubenswrapper[4870]: I0216 17:30:17.052315 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2953-account-create-update-c479c"] Feb 16 17:30:17 crc kubenswrapper[4870]: I0216 17:30:17.065667 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-kq5cz"] Feb 16 17:30:17 crc kubenswrapper[4870]: I0216 17:30:17.077161 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2953-account-create-update-c479c"] Feb 16 17:30:17 crc kubenswrapper[4870]: E0216 17:30:17.224854 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:30:18 crc kubenswrapper[4870]: I0216 17:30:18.234240 
4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a5551a3-6fd8-479b-ad29-488078cc5ad1" path="/var/lib/kubelet/pods/3a5551a3-6fd8-479b-ad29-488078cc5ad1/volumes" Feb 16 17:30:18 crc kubenswrapper[4870]: I0216 17:30:18.234834 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a05594c3-a7c9-4326-83d3-8602b8077b29" path="/var/lib/kubelet/pods/a05594c3-a7c9-4326-83d3-8602b8077b29/volumes" Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 17:30:20.050185 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-m2l86"] Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 17:30:20.064908 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-9txqv"] Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 17:30:20.083798 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-m2l86"] Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 17:30:20.093983 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9496-account-create-update-vvscs"] Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 17:30:20.106091 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-9txqv"] Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 17:30:20.117018 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8204-account-create-update-96fmd"] Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 17:30:20.128287 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-dfjd4"] Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 17:30:20.137382 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-93e7-account-create-update-54hdz"] Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 17:30:20.145798 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8204-account-create-update-96fmd"] Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 
17:30:20.154553 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9496-account-create-update-vvscs"] Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 17:30:20.163693 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-dfjd4"] Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 17:30:20.172345 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-93e7-account-create-update-54hdz"] Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 17:30:20.234337 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aaab981-61b9-43df-a72c-8543ad202980" path="/var/lib/kubelet/pods/0aaab981-61b9-43df-a72c-8543ad202980/volumes" Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 17:30:20.234901 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="387afd6f-c438-44b0-ba75-db7c0ecd911b" path="/var/lib/kubelet/pods/387afd6f-c438-44b0-ba75-db7c0ecd911b/volumes" Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 17:30:20.235498 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70ee1b4f-d0ae-44c6-89fa-03712279a648" path="/var/lib/kubelet/pods/70ee1b4f-d0ae-44c6-89fa-03712279a648/volumes" Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 17:30:20.236051 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b165ae4-b42a-4351-8bde-e88b7fa65137" path="/var/lib/kubelet/pods/7b165ae4-b42a-4351-8bde-e88b7fa65137/volumes" Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 17:30:20.237091 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7efe4b83-3d8c-4e4a-a7e0-0bbccb160506" path="/var/lib/kubelet/pods/7efe4b83-3d8c-4e4a-a7e0-0bbccb160506/volumes" Feb 16 17:30:20 crc kubenswrapper[4870]: I0216 17:30:20.237607 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c8030e8-0df0-478c-83b8-2144a0402358" path="/var/lib/kubelet/pods/9c8030e8-0df0-478c-83b8-2144a0402358/volumes" Feb 16 17:30:26 crc 
kubenswrapper[4870]: I0216 17:30:26.039634 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-x74v2"] Feb 16 17:30:26 crc kubenswrapper[4870]: I0216 17:30:26.068097 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-x74v2"] Feb 16 17:30:26 crc kubenswrapper[4870]: I0216 17:30:26.238654 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="248e9264-b0ec-412b-aa16-0c3869d5f245" path="/var/lib/kubelet/pods/248e9264-b0ec-412b-aa16-0c3869d5f245/volumes" Feb 16 17:30:30 crc kubenswrapper[4870]: E0216 17:30:30.225394 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:30:41 crc kubenswrapper[4870]: E0216 17:30:41.226133 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:30:51 crc kubenswrapper[4870]: I0216 17:30:51.059163 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-5r2tl"] Feb 16 17:30:51 crc kubenswrapper[4870]: I0216 17:30:51.062664 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vmbrl"] Feb 16 17:30:51 crc kubenswrapper[4870]: I0216 17:30:51.073085 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-5r2tl"] Feb 16 17:30:51 crc kubenswrapper[4870]: I0216 17:30:51.081672 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vmbrl"] Feb 16 17:30:52 crc 
kubenswrapper[4870]: I0216 17:30:52.197666 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7ks2v"] Feb 16 17:30:52 crc kubenswrapper[4870]: E0216 17:30:52.198534 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4750d54-1e62-4929-bb98-102c0149deb7" containerName="collect-profiles" Feb 16 17:30:52 crc kubenswrapper[4870]: I0216 17:30:52.198547 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4750d54-1e62-4929-bb98-102c0149deb7" containerName="collect-profiles" Feb 16 17:30:52 crc kubenswrapper[4870]: I0216 17:30:52.198730 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4750d54-1e62-4929-bb98-102c0149deb7" containerName="collect-profiles" Feb 16 17:30:52 crc kubenswrapper[4870]: I0216 17:30:52.200210 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7ks2v" Feb 16 17:30:52 crc kubenswrapper[4870]: I0216 17:30:52.211357 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7ks2v"] Feb 16 17:30:52 crc kubenswrapper[4870]: E0216 17:30:52.228657 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:30:52 crc kubenswrapper[4870]: I0216 17:30:52.242606 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="913c8c11-d196-4f95-9aba-a4552bcbef88" path="/var/lib/kubelet/pods/913c8c11-d196-4f95-9aba-a4552bcbef88/volumes" Feb 16 17:30:52 crc kubenswrapper[4870]: I0216 17:30:52.243223 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="998e2386-0941-4f2b-8e23-d77138831ad4" 
path="/var/lib/kubelet/pods/998e2386-0941-4f2b-8e23-d77138831ad4/volumes" Feb 16 17:30:52 crc kubenswrapper[4870]: I0216 17:30:52.271670 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cb5107-99f1-4ecd-9172-90085dda2643-utilities\") pod \"community-operators-7ks2v\" (UID: \"a0cb5107-99f1-4ecd-9172-90085dda2643\") " pod="openshift-marketplace/community-operators-7ks2v" Feb 16 17:30:52 crc kubenswrapper[4870]: I0216 17:30:52.271974 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbdk9\" (UniqueName: \"kubernetes.io/projected/a0cb5107-99f1-4ecd-9172-90085dda2643-kube-api-access-zbdk9\") pod \"community-operators-7ks2v\" (UID: \"a0cb5107-99f1-4ecd-9172-90085dda2643\") " pod="openshift-marketplace/community-operators-7ks2v" Feb 16 17:30:52 crc kubenswrapper[4870]: I0216 17:30:52.272425 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cb5107-99f1-4ecd-9172-90085dda2643-catalog-content\") pod \"community-operators-7ks2v\" (UID: \"a0cb5107-99f1-4ecd-9172-90085dda2643\") " pod="openshift-marketplace/community-operators-7ks2v" Feb 16 17:30:52 crc kubenswrapper[4870]: I0216 17:30:52.374992 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cb5107-99f1-4ecd-9172-90085dda2643-catalog-content\") pod \"community-operators-7ks2v\" (UID: \"a0cb5107-99f1-4ecd-9172-90085dda2643\") " pod="openshift-marketplace/community-operators-7ks2v" Feb 16 17:30:52 crc kubenswrapper[4870]: I0216 17:30:52.375253 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cb5107-99f1-4ecd-9172-90085dda2643-utilities\") pod \"community-operators-7ks2v\" 
(UID: \"a0cb5107-99f1-4ecd-9172-90085dda2643\") " pod="openshift-marketplace/community-operators-7ks2v" Feb 16 17:30:52 crc kubenswrapper[4870]: I0216 17:30:52.375421 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbdk9\" (UniqueName: \"kubernetes.io/projected/a0cb5107-99f1-4ecd-9172-90085dda2643-kube-api-access-zbdk9\") pod \"community-operators-7ks2v\" (UID: \"a0cb5107-99f1-4ecd-9172-90085dda2643\") " pod="openshift-marketplace/community-operators-7ks2v" Feb 16 17:30:52 crc kubenswrapper[4870]: I0216 17:30:52.375492 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cb5107-99f1-4ecd-9172-90085dda2643-catalog-content\") pod \"community-operators-7ks2v\" (UID: \"a0cb5107-99f1-4ecd-9172-90085dda2643\") " pod="openshift-marketplace/community-operators-7ks2v" Feb 16 17:30:52 crc kubenswrapper[4870]: I0216 17:30:52.375971 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cb5107-99f1-4ecd-9172-90085dda2643-utilities\") pod \"community-operators-7ks2v\" (UID: \"a0cb5107-99f1-4ecd-9172-90085dda2643\") " pod="openshift-marketplace/community-operators-7ks2v" Feb 16 17:30:52 crc kubenswrapper[4870]: I0216 17:30:52.393754 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbdk9\" (UniqueName: \"kubernetes.io/projected/a0cb5107-99f1-4ecd-9172-90085dda2643-kube-api-access-zbdk9\") pod \"community-operators-7ks2v\" (UID: \"a0cb5107-99f1-4ecd-9172-90085dda2643\") " pod="openshift-marketplace/community-operators-7ks2v" Feb 16 17:30:52 crc kubenswrapper[4870]: I0216 17:30:52.538148 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7ks2v" Feb 16 17:30:53 crc kubenswrapper[4870]: I0216 17:30:53.051154 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7ks2v"] Feb 16 17:30:53 crc kubenswrapper[4870]: I0216 17:30:53.307266 4870 generic.go:334] "Generic (PLEG): container finished" podID="a0cb5107-99f1-4ecd-9172-90085dda2643" containerID="121b338d8ce9f60cbb861d76db83237669e21850f8ed63d42992805e5863fa44" exitCode=0 Feb 16 17:30:53 crc kubenswrapper[4870]: I0216 17:30:53.307323 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ks2v" event={"ID":"a0cb5107-99f1-4ecd-9172-90085dda2643","Type":"ContainerDied","Data":"121b338d8ce9f60cbb861d76db83237669e21850f8ed63d42992805e5863fa44"} Feb 16 17:30:53 crc kubenswrapper[4870]: I0216 17:30:53.307354 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ks2v" event={"ID":"a0cb5107-99f1-4ecd-9172-90085dda2643","Type":"ContainerStarted","Data":"3dd8bfc9af675cc3ac02a7e9fecc4c4d970e4d36209c205a78ca10c74cb0cc85"} Feb 16 17:30:53 crc kubenswrapper[4870]: I0216 17:30:53.309273 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:30:54 crc kubenswrapper[4870]: I0216 17:30:54.317816 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ks2v" event={"ID":"a0cb5107-99f1-4ecd-9172-90085dda2643","Type":"ContainerStarted","Data":"1487de56b62372aa54351f956dcddc3e87cc62dfc58c25c4215b649b1f2d439d"} Feb 16 17:30:54 crc kubenswrapper[4870]: I0216 17:30:54.386281 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d8nrr"] Feb 16 17:30:54 crc kubenswrapper[4870]: I0216 17:30:54.392447 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d8nrr" Feb 16 17:30:54 crc kubenswrapper[4870]: I0216 17:30:54.402311 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d8nrr"] Feb 16 17:30:54 crc kubenswrapper[4870]: I0216 17:30:54.424231 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shrgl\" (UniqueName: \"kubernetes.io/projected/0b4cd71e-c237-4f98-a315-94e61c1a4fec-kube-api-access-shrgl\") pod \"redhat-operators-d8nrr\" (UID: \"0b4cd71e-c237-4f98-a315-94e61c1a4fec\") " pod="openshift-marketplace/redhat-operators-d8nrr" Feb 16 17:30:54 crc kubenswrapper[4870]: I0216 17:30:54.424387 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b4cd71e-c237-4f98-a315-94e61c1a4fec-catalog-content\") pod \"redhat-operators-d8nrr\" (UID: \"0b4cd71e-c237-4f98-a315-94e61c1a4fec\") " pod="openshift-marketplace/redhat-operators-d8nrr" Feb 16 17:30:54 crc kubenswrapper[4870]: I0216 17:30:54.424456 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b4cd71e-c237-4f98-a315-94e61c1a4fec-utilities\") pod \"redhat-operators-d8nrr\" (UID: \"0b4cd71e-c237-4f98-a315-94e61c1a4fec\") " pod="openshift-marketplace/redhat-operators-d8nrr" Feb 16 17:30:54 crc kubenswrapper[4870]: I0216 17:30:54.525976 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b4cd71e-c237-4f98-a315-94e61c1a4fec-utilities\") pod \"redhat-operators-d8nrr\" (UID: \"0b4cd71e-c237-4f98-a315-94e61c1a4fec\") " pod="openshift-marketplace/redhat-operators-d8nrr" Feb 16 17:30:54 crc kubenswrapper[4870]: I0216 17:30:54.526081 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-shrgl\" (UniqueName: \"kubernetes.io/projected/0b4cd71e-c237-4f98-a315-94e61c1a4fec-kube-api-access-shrgl\") pod \"redhat-operators-d8nrr\" (UID: \"0b4cd71e-c237-4f98-a315-94e61c1a4fec\") " pod="openshift-marketplace/redhat-operators-d8nrr" Feb 16 17:30:54 crc kubenswrapper[4870]: I0216 17:30:54.526251 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b4cd71e-c237-4f98-a315-94e61c1a4fec-catalog-content\") pod \"redhat-operators-d8nrr\" (UID: \"0b4cd71e-c237-4f98-a315-94e61c1a4fec\") " pod="openshift-marketplace/redhat-operators-d8nrr" Feb 16 17:30:54 crc kubenswrapper[4870]: I0216 17:30:54.526549 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b4cd71e-c237-4f98-a315-94e61c1a4fec-utilities\") pod \"redhat-operators-d8nrr\" (UID: \"0b4cd71e-c237-4f98-a315-94e61c1a4fec\") " pod="openshift-marketplace/redhat-operators-d8nrr" Feb 16 17:30:54 crc kubenswrapper[4870]: I0216 17:30:54.526711 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b4cd71e-c237-4f98-a315-94e61c1a4fec-catalog-content\") pod \"redhat-operators-d8nrr\" (UID: \"0b4cd71e-c237-4f98-a315-94e61c1a4fec\") " pod="openshift-marketplace/redhat-operators-d8nrr" Feb 16 17:30:54 crc kubenswrapper[4870]: I0216 17:30:54.550853 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shrgl\" (UniqueName: \"kubernetes.io/projected/0b4cd71e-c237-4f98-a315-94e61c1a4fec-kube-api-access-shrgl\") pod \"redhat-operators-d8nrr\" (UID: \"0b4cd71e-c237-4f98-a315-94e61c1a4fec\") " pod="openshift-marketplace/redhat-operators-d8nrr" Feb 16 17:30:54 crc kubenswrapper[4870]: I0216 17:30:54.722143 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d8nrr" Feb 16 17:30:55 crc kubenswrapper[4870]: I0216 17:30:55.221282 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d8nrr"] Feb 16 17:30:55 crc kubenswrapper[4870]: W0216 17:30:55.226467 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b4cd71e_c237_4f98_a315_94e61c1a4fec.slice/crio-e50c6cb8f3ea38413dbebdc063d3b75800835c0bd4f605f2cf3bb6fabc446a8f WatchSource:0}: Error finding container e50c6cb8f3ea38413dbebdc063d3b75800835c0bd4f605f2cf3bb6fabc446a8f: Status 404 returned error can't find the container with id e50c6cb8f3ea38413dbebdc063d3b75800835c0bd4f605f2cf3bb6fabc446a8f Feb 16 17:30:55 crc kubenswrapper[4870]: I0216 17:30:55.335418 4870 generic.go:334] "Generic (PLEG): container finished" podID="a0cb5107-99f1-4ecd-9172-90085dda2643" containerID="1487de56b62372aa54351f956dcddc3e87cc62dfc58c25c4215b649b1f2d439d" exitCode=0 Feb 16 17:30:55 crc kubenswrapper[4870]: I0216 17:30:55.335485 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ks2v" event={"ID":"a0cb5107-99f1-4ecd-9172-90085dda2643","Type":"ContainerDied","Data":"1487de56b62372aa54351f956dcddc3e87cc62dfc58c25c4215b649b1f2d439d"} Feb 16 17:30:55 crc kubenswrapper[4870]: I0216 17:30:55.336803 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d8nrr" event={"ID":"0b4cd71e-c237-4f98-a315-94e61c1a4fec","Type":"ContainerStarted","Data":"e50c6cb8f3ea38413dbebdc063d3b75800835c0bd4f605f2cf3bb6fabc446a8f"} Feb 16 17:30:56 crc kubenswrapper[4870]: I0216 17:30:56.347724 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ks2v" 
event={"ID":"a0cb5107-99f1-4ecd-9172-90085dda2643","Type":"ContainerStarted","Data":"d2f42cb518b97e90fce09a438272f246df6f4acd3a2ba11fbda78c3f50086733"} Feb 16 17:30:56 crc kubenswrapper[4870]: I0216 17:30:56.349533 4870 generic.go:334] "Generic (PLEG): container finished" podID="0b4cd71e-c237-4f98-a315-94e61c1a4fec" containerID="a22f674d6b8833e781a8b3c59e6d59b766ec377e27728f004ca09d3eb6552f8a" exitCode=0 Feb 16 17:30:56 crc kubenswrapper[4870]: I0216 17:30:56.349571 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d8nrr" event={"ID":"0b4cd71e-c237-4f98-a315-94e61c1a4fec","Type":"ContainerDied","Data":"a22f674d6b8833e781a8b3c59e6d59b766ec377e27728f004ca09d3eb6552f8a"} Feb 16 17:30:56 crc kubenswrapper[4870]: I0216 17:30:56.368797 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7ks2v" podStartSLOduration=1.945390682 podStartE2EDuration="4.368772528s" podCreationTimestamp="2026-02-16 17:30:52 +0000 UTC" firstStartedPulling="2026-02-16 17:30:53.309071332 +0000 UTC m=+1857.792535716" lastFinishedPulling="2026-02-16 17:30:55.732453188 +0000 UTC m=+1860.215917562" observedRunningTime="2026-02-16 17:30:56.365632649 +0000 UTC m=+1860.849097033" watchObservedRunningTime="2026-02-16 17:30:56.368772528 +0000 UTC m=+1860.852236912" Feb 16 17:30:57 crc kubenswrapper[4870]: I0216 17:30:57.364104 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d8nrr" event={"ID":"0b4cd71e-c237-4f98-a315-94e61c1a4fec","Type":"ContainerStarted","Data":"b45934f6f8f1fddae1373b803607e1bbf0588d4b3399d57f5e9fcb070cb9b8a9"} Feb 16 17:30:59 crc kubenswrapper[4870]: I0216 17:30:59.386147 4870 generic.go:334] "Generic (PLEG): container finished" podID="0b4cd71e-c237-4f98-a315-94e61c1a4fec" containerID="b45934f6f8f1fddae1373b803607e1bbf0588d4b3399d57f5e9fcb070cb9b8a9" exitCode=0 Feb 16 17:30:59 crc kubenswrapper[4870]: I0216 
17:30:59.386227 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d8nrr" event={"ID":"0b4cd71e-c237-4f98-a315-94e61c1a4fec","Type":"ContainerDied","Data":"b45934f6f8f1fddae1373b803607e1bbf0588d4b3399d57f5e9fcb070cb9b8a9"} Feb 16 17:31:00 crc kubenswrapper[4870]: I0216 17:31:00.408228 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d8nrr" event={"ID":"0b4cd71e-c237-4f98-a315-94e61c1a4fec","Type":"ContainerStarted","Data":"0ee47f7b5858777fc1f3752f4d4bf374de19d4753de103840ac6ff7ca57846f1"} Feb 16 17:31:00 crc kubenswrapper[4870]: I0216 17:31:00.441308 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d8nrr" podStartSLOduration=2.784686921 podStartE2EDuration="6.441288952s" podCreationTimestamp="2026-02-16 17:30:54 +0000 UTC" firstStartedPulling="2026-02-16 17:30:56.351239728 +0000 UTC m=+1860.834704102" lastFinishedPulling="2026-02-16 17:31:00.007841749 +0000 UTC m=+1864.491306133" observedRunningTime="2026-02-16 17:31:00.436115584 +0000 UTC m=+1864.919579988" watchObservedRunningTime="2026-02-16 17:31:00.441288952 +0000 UTC m=+1864.924753336" Feb 16 17:31:02 crc kubenswrapper[4870]: I0216 17:31:02.539236 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7ks2v" Feb 16 17:31:02 crc kubenswrapper[4870]: I0216 17:31:02.541094 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7ks2v" Feb 16 17:31:02 crc kubenswrapper[4870]: I0216 17:31:02.588576 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7ks2v" Feb 16 17:31:02 crc kubenswrapper[4870]: I0216 17:31:02.645672 4870 scope.go:117] "RemoveContainer" containerID="07001bc8b8a2ae76e78241c9cfb34835feb178c03540945ad4518828f2d8f866" Feb 16 17:31:02 crc 
kubenswrapper[4870]: I0216 17:31:02.668958 4870 scope.go:117] "RemoveContainer" containerID="5e2622801776ff1c1cd43fd2ec2e7f94f8dcbc4d95b6b312535e6b3a306936fe" Feb 16 17:31:02 crc kubenswrapper[4870]: I0216 17:31:02.733963 4870 scope.go:117] "RemoveContainer" containerID="088d4deb00e2be749a3a4320a97cd43fd3e799d4cdac734ca4cf94854cca8a4e" Feb 16 17:31:02 crc kubenswrapper[4870]: I0216 17:31:02.775016 4870 scope.go:117] "RemoveContainer" containerID="e7ff58a46ba14d759833386122a7230b205cdc58ce52a1221f1f4038bd096973" Feb 16 17:31:02 crc kubenswrapper[4870]: I0216 17:31:02.833216 4870 scope.go:117] "RemoveContainer" containerID="1bd04f1d0f661e8bd9a2941989e2c6b92edd63e39b11c93d58af6cb1e82330ad" Feb 16 17:31:02 crc kubenswrapper[4870]: I0216 17:31:02.887220 4870 scope.go:117] "RemoveContainer" containerID="c4eb87cdfbe2c84d44ae4f5bfb73ed4f19e3b389a7bb4ebfdcf82c640f76c0b2" Feb 16 17:31:02 crc kubenswrapper[4870]: I0216 17:31:02.932401 4870 scope.go:117] "RemoveContainer" containerID="4d67941051f0bb1162b8b38cc14716c6a202053b1fb2fcc798f8cc37fbbd5355" Feb 16 17:31:02 crc kubenswrapper[4870]: I0216 17:31:02.951637 4870 scope.go:117] "RemoveContainer" containerID="910b06325791e2333278b2e4bcc64593d2af14b61cf3622c10f5a89387139721" Feb 16 17:31:02 crc kubenswrapper[4870]: I0216 17:31:02.970195 4870 scope.go:117] "RemoveContainer" containerID="c0c71372013b216ad72c7f4d488af2b32187b99797ebcfea00c982c3d29fd08a" Feb 16 17:31:02 crc kubenswrapper[4870]: I0216 17:31:02.992230 4870 scope.go:117] "RemoveContainer" containerID="2a6bc0ee4027889558f6b7a4fce9de3b3296fcd1cfa1a1b7bb384094461632ce" Feb 16 17:31:03 crc kubenswrapper[4870]: I0216 17:31:03.037258 4870 scope.go:117] "RemoveContainer" containerID="2c35b26d7811f92ff1318317667c229c8b097ce669520923a326520d13548b7b" Feb 16 17:31:03 crc kubenswrapper[4870]: I0216 17:31:03.066309 4870 scope.go:117] "RemoveContainer" containerID="d1ab562c0cfd6ac7977bea4748ed668dd646fb38dd9ae0a02e182c84fd7f5276" Feb 16 17:31:03 crc kubenswrapper[4870]: I0216 
17:31:03.524761 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7ks2v" Feb 16 17:31:03 crc kubenswrapper[4870]: I0216 17:31:03.979341 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7ks2v"] Feb 16 17:31:04 crc kubenswrapper[4870]: E0216 17:31:04.225229 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:31:04 crc kubenswrapper[4870]: I0216 17:31:04.723212 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d8nrr" Feb 16 17:31:04 crc kubenswrapper[4870]: I0216 17:31:04.723272 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d8nrr" Feb 16 17:31:05 crc kubenswrapper[4870]: I0216 17:31:05.475861 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7ks2v" podUID="a0cb5107-99f1-4ecd-9172-90085dda2643" containerName="registry-server" containerID="cri-o://d2f42cb518b97e90fce09a438272f246df6f4acd3a2ba11fbda78c3f50086733" gracePeriod=2 Feb 16 17:31:05 crc kubenswrapper[4870]: I0216 17:31:05.778755 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d8nrr" podUID="0b4cd71e-c237-4f98-a315-94e61c1a4fec" containerName="registry-server" probeResult="failure" output=< Feb 16 17:31:05 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s Feb 16 17:31:05 crc kubenswrapper[4870]: > Feb 16 17:31:05 crc kubenswrapper[4870]: I0216 17:31:05.941084 4870 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-7ks2v" Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.068759 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cb5107-99f1-4ecd-9172-90085dda2643-utilities\") pod \"a0cb5107-99f1-4ecd-9172-90085dda2643\" (UID: \"a0cb5107-99f1-4ecd-9172-90085dda2643\") " Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.068847 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cb5107-99f1-4ecd-9172-90085dda2643-catalog-content\") pod \"a0cb5107-99f1-4ecd-9172-90085dda2643\" (UID: \"a0cb5107-99f1-4ecd-9172-90085dda2643\") " Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.069120 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbdk9\" (UniqueName: \"kubernetes.io/projected/a0cb5107-99f1-4ecd-9172-90085dda2643-kube-api-access-zbdk9\") pod \"a0cb5107-99f1-4ecd-9172-90085dda2643\" (UID: \"a0cb5107-99f1-4ecd-9172-90085dda2643\") " Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.069210 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0cb5107-99f1-4ecd-9172-90085dda2643-utilities" (OuterVolumeSpecName: "utilities") pod "a0cb5107-99f1-4ecd-9172-90085dda2643" (UID: "a0cb5107-99f1-4ecd-9172-90085dda2643"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.069870 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cb5107-99f1-4ecd-9172-90085dda2643-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.104030 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0cb5107-99f1-4ecd-9172-90085dda2643-kube-api-access-zbdk9" (OuterVolumeSpecName: "kube-api-access-zbdk9") pod "a0cb5107-99f1-4ecd-9172-90085dda2643" (UID: "a0cb5107-99f1-4ecd-9172-90085dda2643"). InnerVolumeSpecName "kube-api-access-zbdk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.122597 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0cb5107-99f1-4ecd-9172-90085dda2643-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0cb5107-99f1-4ecd-9172-90085dda2643" (UID: "a0cb5107-99f1-4ecd-9172-90085dda2643"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.172131 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbdk9\" (UniqueName: \"kubernetes.io/projected/a0cb5107-99f1-4ecd-9172-90085dda2643-kube-api-access-zbdk9\") on node \"crc\" DevicePath \"\"" Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.172167 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cb5107-99f1-4ecd-9172-90085dda2643-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.487597 4870 generic.go:334] "Generic (PLEG): container finished" podID="a0cb5107-99f1-4ecd-9172-90085dda2643" containerID="d2f42cb518b97e90fce09a438272f246df6f4acd3a2ba11fbda78c3f50086733" exitCode=0 Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.487660 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ks2v" event={"ID":"a0cb5107-99f1-4ecd-9172-90085dda2643","Type":"ContainerDied","Data":"d2f42cb518b97e90fce09a438272f246df6f4acd3a2ba11fbda78c3f50086733"} Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.487704 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ks2v" event={"ID":"a0cb5107-99f1-4ecd-9172-90085dda2643","Type":"ContainerDied","Data":"3dd8bfc9af675cc3ac02a7e9fecc4c4d970e4d36209c205a78ca10c74cb0cc85"} Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.487717 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7ks2v" Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.487730 4870 scope.go:117] "RemoveContainer" containerID="d2f42cb518b97e90fce09a438272f246df6f4acd3a2ba11fbda78c3f50086733" Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.517779 4870 scope.go:117] "RemoveContainer" containerID="1487de56b62372aa54351f956dcddc3e87cc62dfc58c25c4215b649b1f2d439d" Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.518190 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7ks2v"] Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.530222 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7ks2v"] Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.538434 4870 scope.go:117] "RemoveContainer" containerID="121b338d8ce9f60cbb861d76db83237669e21850f8ed63d42992805e5863fa44" Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.611428 4870 scope.go:117] "RemoveContainer" containerID="d2f42cb518b97e90fce09a438272f246df6f4acd3a2ba11fbda78c3f50086733" Feb 16 17:31:06 crc kubenswrapper[4870]: E0216 17:31:06.611888 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2f42cb518b97e90fce09a438272f246df6f4acd3a2ba11fbda78c3f50086733\": container with ID starting with d2f42cb518b97e90fce09a438272f246df6f4acd3a2ba11fbda78c3f50086733 not found: ID does not exist" containerID="d2f42cb518b97e90fce09a438272f246df6f4acd3a2ba11fbda78c3f50086733" Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.611922 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2f42cb518b97e90fce09a438272f246df6f4acd3a2ba11fbda78c3f50086733"} err="failed to get container status \"d2f42cb518b97e90fce09a438272f246df6f4acd3a2ba11fbda78c3f50086733\": rpc error: code = NotFound desc = could not find 
container \"d2f42cb518b97e90fce09a438272f246df6f4acd3a2ba11fbda78c3f50086733\": container with ID starting with d2f42cb518b97e90fce09a438272f246df6f4acd3a2ba11fbda78c3f50086733 not found: ID does not exist" Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.611941 4870 scope.go:117] "RemoveContainer" containerID="1487de56b62372aa54351f956dcddc3e87cc62dfc58c25c4215b649b1f2d439d" Feb 16 17:31:06 crc kubenswrapper[4870]: E0216 17:31:06.612216 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1487de56b62372aa54351f956dcddc3e87cc62dfc58c25c4215b649b1f2d439d\": container with ID starting with 1487de56b62372aa54351f956dcddc3e87cc62dfc58c25c4215b649b1f2d439d not found: ID does not exist" containerID="1487de56b62372aa54351f956dcddc3e87cc62dfc58c25c4215b649b1f2d439d" Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.612239 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1487de56b62372aa54351f956dcddc3e87cc62dfc58c25c4215b649b1f2d439d"} err="failed to get container status \"1487de56b62372aa54351f956dcddc3e87cc62dfc58c25c4215b649b1f2d439d\": rpc error: code = NotFound desc = could not find container \"1487de56b62372aa54351f956dcddc3e87cc62dfc58c25c4215b649b1f2d439d\": container with ID starting with 1487de56b62372aa54351f956dcddc3e87cc62dfc58c25c4215b649b1f2d439d not found: ID does not exist" Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.612255 4870 scope.go:117] "RemoveContainer" containerID="121b338d8ce9f60cbb861d76db83237669e21850f8ed63d42992805e5863fa44" Feb 16 17:31:06 crc kubenswrapper[4870]: E0216 17:31:06.612550 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"121b338d8ce9f60cbb861d76db83237669e21850f8ed63d42992805e5863fa44\": container with ID starting with 121b338d8ce9f60cbb861d76db83237669e21850f8ed63d42992805e5863fa44 not found: ID does 
not exist" containerID="121b338d8ce9f60cbb861d76db83237669e21850f8ed63d42992805e5863fa44" Feb 16 17:31:06 crc kubenswrapper[4870]: I0216 17:31:06.612573 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"121b338d8ce9f60cbb861d76db83237669e21850f8ed63d42992805e5863fa44"} err="failed to get container status \"121b338d8ce9f60cbb861d76db83237669e21850f8ed63d42992805e5863fa44\": rpc error: code = NotFound desc = could not find container \"121b338d8ce9f60cbb861d76db83237669e21850f8ed63d42992805e5863fa44\": container with ID starting with 121b338d8ce9f60cbb861d76db83237669e21850f8ed63d42992805e5863fa44 not found: ID does not exist" Feb 16 17:31:08 crc kubenswrapper[4870]: I0216 17:31:08.040832 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-s4xns"] Feb 16 17:31:08 crc kubenswrapper[4870]: I0216 17:31:08.049839 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-s4xns"] Feb 16 17:31:08 crc kubenswrapper[4870]: I0216 17:31:08.059094 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7qf8f"] Feb 16 17:31:08 crc kubenswrapper[4870]: I0216 17:31:08.068136 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-9n2tj"] Feb 16 17:31:08 crc kubenswrapper[4870]: I0216 17:31:08.079691 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-9n2tj"] Feb 16 17:31:08 crc kubenswrapper[4870]: I0216 17:31:08.088079 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7qf8f"] Feb 16 17:31:08 crc kubenswrapper[4870]: I0216 17:31:08.234918 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="375ecf8f-1d93-40fb-85dc-c0eabcef46c3" path="/var/lib/kubelet/pods/375ecf8f-1d93-40fb-85dc-c0eabcef46c3/volumes" Feb 16 17:31:08 crc kubenswrapper[4870]: I0216 17:31:08.235578 4870 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="6c6489e4-d44c-4e7d-a451-620da210060e" path="/var/lib/kubelet/pods/6c6489e4-d44c-4e7d-a451-620da210060e/volumes" Feb 16 17:31:08 crc kubenswrapper[4870]: I0216 17:31:08.236124 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0cb5107-99f1-4ecd-9172-90085dda2643" path="/var/lib/kubelet/pods/a0cb5107-99f1-4ecd-9172-90085dda2643/volumes" Feb 16 17:31:08 crc kubenswrapper[4870]: I0216 17:31:08.237538 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e04460b7-407f-4474-bf99-264869cf6529" path="/var/lib/kubelet/pods/e04460b7-407f-4474-bf99-264869cf6529/volumes" Feb 16 17:31:14 crc kubenswrapper[4870]: I0216 17:31:14.812107 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d8nrr" Feb 16 17:31:14 crc kubenswrapper[4870]: I0216 17:31:14.887851 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d8nrr" Feb 16 17:31:15 crc kubenswrapper[4870]: I0216 17:31:15.053443 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d8nrr"] Feb 16 17:31:16 crc kubenswrapper[4870]: I0216 17:31:16.596661 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d8nrr" podUID="0b4cd71e-c237-4f98-a315-94e61c1a4fec" containerName="registry-server" containerID="cri-o://0ee47f7b5858777fc1f3752f4d4bf374de19d4753de103840ac6ff7ca57846f1" gracePeriod=2 Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.095699 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d8nrr" Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.204665 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shrgl\" (UniqueName: \"kubernetes.io/projected/0b4cd71e-c237-4f98-a315-94e61c1a4fec-kube-api-access-shrgl\") pod \"0b4cd71e-c237-4f98-a315-94e61c1a4fec\" (UID: \"0b4cd71e-c237-4f98-a315-94e61c1a4fec\") " Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.204805 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b4cd71e-c237-4f98-a315-94e61c1a4fec-catalog-content\") pod \"0b4cd71e-c237-4f98-a315-94e61c1a4fec\" (UID: \"0b4cd71e-c237-4f98-a315-94e61c1a4fec\") " Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.204964 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b4cd71e-c237-4f98-a315-94e61c1a4fec-utilities\") pod \"0b4cd71e-c237-4f98-a315-94e61c1a4fec\" (UID: \"0b4cd71e-c237-4f98-a315-94e61c1a4fec\") " Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.205644 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b4cd71e-c237-4f98-a315-94e61c1a4fec-utilities" (OuterVolumeSpecName: "utilities") pod "0b4cd71e-c237-4f98-a315-94e61c1a4fec" (UID: "0b4cd71e-c237-4f98-a315-94e61c1a4fec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.211137 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b4cd71e-c237-4f98-a315-94e61c1a4fec-kube-api-access-shrgl" (OuterVolumeSpecName: "kube-api-access-shrgl") pod "0b4cd71e-c237-4f98-a315-94e61c1a4fec" (UID: "0b4cd71e-c237-4f98-a315-94e61c1a4fec"). InnerVolumeSpecName "kube-api-access-shrgl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.307401 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b4cd71e-c237-4f98-a315-94e61c1a4fec-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.307439 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shrgl\" (UniqueName: \"kubernetes.io/projected/0b4cd71e-c237-4f98-a315-94e61c1a4fec-kube-api-access-shrgl\") on node \"crc\" DevicePath \"\"" Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.336486 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b4cd71e-c237-4f98-a315-94e61c1a4fec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b4cd71e-c237-4f98-a315-94e61c1a4fec" (UID: "0b4cd71e-c237-4f98-a315-94e61c1a4fec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.409348 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b4cd71e-c237-4f98-a315-94e61c1a4fec-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.607812 4870 generic.go:334] "Generic (PLEG): container finished" podID="0b4cd71e-c237-4f98-a315-94e61c1a4fec" containerID="0ee47f7b5858777fc1f3752f4d4bf374de19d4753de103840ac6ff7ca57846f1" exitCode=0 Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.607873 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d8nrr" Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.607873 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d8nrr" event={"ID":"0b4cd71e-c237-4f98-a315-94e61c1a4fec","Type":"ContainerDied","Data":"0ee47f7b5858777fc1f3752f4d4bf374de19d4753de103840ac6ff7ca57846f1"} Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.608315 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d8nrr" event={"ID":"0b4cd71e-c237-4f98-a315-94e61c1a4fec","Type":"ContainerDied","Data":"e50c6cb8f3ea38413dbebdc063d3b75800835c0bd4f605f2cf3bb6fabc446a8f"} Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.608355 4870 scope.go:117] "RemoveContainer" containerID="0ee47f7b5858777fc1f3752f4d4bf374de19d4753de103840ac6ff7ca57846f1" Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.632694 4870 scope.go:117] "RemoveContainer" containerID="b45934f6f8f1fddae1373b803607e1bbf0588d4b3399d57f5e9fcb070cb9b8a9" Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.647386 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d8nrr"] Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.655998 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d8nrr"] Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.667591 4870 scope.go:117] "RemoveContainer" containerID="a22f674d6b8833e781a8b3c59e6d59b766ec377e27728f004ca09d3eb6552f8a" Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.706110 4870 scope.go:117] "RemoveContainer" containerID="0ee47f7b5858777fc1f3752f4d4bf374de19d4753de103840ac6ff7ca57846f1" Feb 16 17:31:17 crc kubenswrapper[4870]: E0216 17:31:17.709766 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0ee47f7b5858777fc1f3752f4d4bf374de19d4753de103840ac6ff7ca57846f1\": container with ID starting with 0ee47f7b5858777fc1f3752f4d4bf374de19d4753de103840ac6ff7ca57846f1 not found: ID does not exist" containerID="0ee47f7b5858777fc1f3752f4d4bf374de19d4753de103840ac6ff7ca57846f1" Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.709822 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ee47f7b5858777fc1f3752f4d4bf374de19d4753de103840ac6ff7ca57846f1"} err="failed to get container status \"0ee47f7b5858777fc1f3752f4d4bf374de19d4753de103840ac6ff7ca57846f1\": rpc error: code = NotFound desc = could not find container \"0ee47f7b5858777fc1f3752f4d4bf374de19d4753de103840ac6ff7ca57846f1\": container with ID starting with 0ee47f7b5858777fc1f3752f4d4bf374de19d4753de103840ac6ff7ca57846f1 not found: ID does not exist" Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.709854 4870 scope.go:117] "RemoveContainer" containerID="b45934f6f8f1fddae1373b803607e1bbf0588d4b3399d57f5e9fcb070cb9b8a9" Feb 16 17:31:17 crc kubenswrapper[4870]: E0216 17:31:17.711192 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b45934f6f8f1fddae1373b803607e1bbf0588d4b3399d57f5e9fcb070cb9b8a9\": container with ID starting with b45934f6f8f1fddae1373b803607e1bbf0588d4b3399d57f5e9fcb070cb9b8a9 not found: ID does not exist" containerID="b45934f6f8f1fddae1373b803607e1bbf0588d4b3399d57f5e9fcb070cb9b8a9" Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.711233 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b45934f6f8f1fddae1373b803607e1bbf0588d4b3399d57f5e9fcb070cb9b8a9"} err="failed to get container status \"b45934f6f8f1fddae1373b803607e1bbf0588d4b3399d57f5e9fcb070cb9b8a9\": rpc error: code = NotFound desc = could not find container \"b45934f6f8f1fddae1373b803607e1bbf0588d4b3399d57f5e9fcb070cb9b8a9\": container with ID 
starting with b45934f6f8f1fddae1373b803607e1bbf0588d4b3399d57f5e9fcb070cb9b8a9 not found: ID does not exist" Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.711265 4870 scope.go:117] "RemoveContainer" containerID="a22f674d6b8833e781a8b3c59e6d59b766ec377e27728f004ca09d3eb6552f8a" Feb 16 17:31:17 crc kubenswrapper[4870]: E0216 17:31:17.711993 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a22f674d6b8833e781a8b3c59e6d59b766ec377e27728f004ca09d3eb6552f8a\": container with ID starting with a22f674d6b8833e781a8b3c59e6d59b766ec377e27728f004ca09d3eb6552f8a not found: ID does not exist" containerID="a22f674d6b8833e781a8b3c59e6d59b766ec377e27728f004ca09d3eb6552f8a" Feb 16 17:31:17 crc kubenswrapper[4870]: I0216 17:31:17.712058 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a22f674d6b8833e781a8b3c59e6d59b766ec377e27728f004ca09d3eb6552f8a"} err="failed to get container status \"a22f674d6b8833e781a8b3c59e6d59b766ec377e27728f004ca09d3eb6552f8a\": rpc error: code = NotFound desc = could not find container \"a22f674d6b8833e781a8b3c59e6d59b766ec377e27728f004ca09d3eb6552f8a\": container with ID starting with a22f674d6b8833e781a8b3c59e6d59b766ec377e27728f004ca09d3eb6552f8a not found: ID does not exist" Feb 16 17:31:18 crc kubenswrapper[4870]: I0216 17:31:18.236189 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b4cd71e-c237-4f98-a315-94e61c1a4fec" path="/var/lib/kubelet/pods/0b4cd71e-c237-4f98-a315-94e61c1a4fec/volumes" Feb 16 17:31:19 crc kubenswrapper[4870]: E0216 17:31:19.315848 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has 
expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:31:19 crc kubenswrapper[4870]: E0216 17:31:19.316169 4870 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:31:19 crc kubenswrapper[4870]: E0216 17:31:19.316307 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,Mo
untPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7b2p7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6hkdm_openstack(34a86750-1fff-4add-8462-7ab805ec7f89): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:31:19 crc kubenswrapper[4870]: E0216 17:31:19.317763 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:31:24 crc kubenswrapper[4870]: I0216 17:31:24.051254 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-4mwgd"] Feb 16 17:31:24 crc kubenswrapper[4870]: I0216 17:31:24.068112 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-4mwgd"] Feb 16 17:31:24 crc kubenswrapper[4870]: I0216 17:31:24.243297 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9719dd82-cec9-4a56-ae93-29ccca75a3ef" path="/var/lib/kubelet/pods/9719dd82-cec9-4a56-ae93-29ccca75a3ef/volumes" Feb 16 17:31:33 crc kubenswrapper[4870]: E0216 17:31:33.224367 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:31:47 crc kubenswrapper[4870]: E0216 17:31:47.227266 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:31:59 crc kubenswrapper[4870]: E0216 17:31:59.226617 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:32:03 crc kubenswrapper[4870]: I0216 17:32:03.379626 4870 scope.go:117] "RemoveContainer" 
containerID="d99881c559e2858a8ad39267eb69f5c4df47aaf96e0282d2788304dd218584e2" Feb 16 17:32:03 crc kubenswrapper[4870]: I0216 17:32:03.423621 4870 scope.go:117] "RemoveContainer" containerID="9372cf886b3676a5f4ee950cb13b4255a9c9461c9d5001be205fec2bfb180c6f" Feb 16 17:32:03 crc kubenswrapper[4870]: I0216 17:32:03.466214 4870 scope.go:117] "RemoveContainer" containerID="957836a0ae9aac98f89a454b9eaf0bbd4596d3ba03ecffa8d4c32a70e7df8d08" Feb 16 17:32:03 crc kubenswrapper[4870]: I0216 17:32:03.503512 4870 scope.go:117] "RemoveContainer" containerID="9458b2d325f9566563ba2c68b0534d1b1d5b17072d854677ad3dea909e2e35f2" Feb 16 17:32:13 crc kubenswrapper[4870]: I0216 17:32:13.055160 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-4jdm2"] Feb 16 17:32:13 crc kubenswrapper[4870]: I0216 17:32:13.066976 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-vxqf6"] Feb 16 17:32:13 crc kubenswrapper[4870]: I0216 17:32:13.076390 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-jbd2m"] Feb 16 17:32:13 crc kubenswrapper[4870]: I0216 17:32:13.084747 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-663e-account-create-update-ndbzz"] Feb 16 17:32:13 crc kubenswrapper[4870]: I0216 17:32:13.093916 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-vxqf6"] Feb 16 17:32:13 crc kubenswrapper[4870]: I0216 17:32:13.102221 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-4jdm2"] Feb 16 17:32:13 crc kubenswrapper[4870]: I0216 17:32:13.110885 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-jbd2m"] Feb 16 17:32:13 crc kubenswrapper[4870]: I0216 17:32:13.119658 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-663e-account-create-update-ndbzz"] Feb 16 17:32:13 crc kubenswrapper[4870]: E0216 
17:32:13.225223 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:32:14 crc kubenswrapper[4870]: I0216 17:32:14.037804 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-beba-account-create-update-br42d"] Feb 16 17:32:14 crc kubenswrapper[4870]: I0216 17:32:14.045762 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-da3e-account-create-update-q4dwp"] Feb 16 17:32:14 crc kubenswrapper[4870]: I0216 17:32:14.053412 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-beba-account-create-update-br42d"] Feb 16 17:32:14 crc kubenswrapper[4870]: I0216 17:32:14.060306 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-da3e-account-create-update-q4dwp"] Feb 16 17:32:14 crc kubenswrapper[4870]: I0216 17:32:14.234192 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22bac8e0-a77f-44bc-8011-3f676864a0e1" path="/var/lib/kubelet/pods/22bac8e0-a77f-44bc-8011-3f676864a0e1/volumes" Feb 16 17:32:14 crc kubenswrapper[4870]: I0216 17:32:14.234806 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a0c8689-ac35-4b83-99dc-bdda1ce12ec8" path="/var/lib/kubelet/pods/2a0c8689-ac35-4b83-99dc-bdda1ce12ec8/volumes" Feb 16 17:32:14 crc kubenswrapper[4870]: I0216 17:32:14.235379 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c223101-34a3-41b4-b6ce-5eb5d05692ac" path="/var/lib/kubelet/pods/9c223101-34a3-41b4-b6ce-5eb5d05692ac/volumes" Feb 16 17:32:14 crc kubenswrapper[4870]: I0216 17:32:14.236007 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8c81151-3dea-4340-821f-1e6d7df36926" 
path="/var/lib/kubelet/pods/d8c81151-3dea-4340-821f-1e6d7df36926/volumes" Feb 16 17:32:14 crc kubenswrapper[4870]: I0216 17:32:14.237057 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ece93484-8813-4d47-a826-9a8f66cd6d78" path="/var/lib/kubelet/pods/ece93484-8813-4d47-a826-9a8f66cd6d78/volumes" Feb 16 17:32:14 crc kubenswrapper[4870]: I0216 17:32:14.237658 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a05c82-dcac-4d4c-8309-8c2d6389b31b" path="/var/lib/kubelet/pods/f9a05c82-dcac-4d4c-8309-8c2d6389b31b/volumes" Feb 16 17:32:26 crc kubenswrapper[4870]: E0216 17:32:26.225882 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:32:35 crc kubenswrapper[4870]: I0216 17:32:35.366686 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:32:35 crc kubenswrapper[4870]: I0216 17:32:35.367313 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:32:38 crc kubenswrapper[4870]: E0216 17:32:38.226783 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:32:44 crc kubenswrapper[4870]: I0216 17:32:44.062711 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-k452h"] Feb 16 17:32:44 crc kubenswrapper[4870]: I0216 17:32:44.072855 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-k452h"] Feb 16 17:32:44 crc kubenswrapper[4870]: I0216 17:32:44.235251 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="849da73d-204b-4434-aadb-cb79ab8aaca8" path="/var/lib/kubelet/pods/849da73d-204b-4434-aadb-cb79ab8aaca8/volumes" Feb 16 17:32:49 crc kubenswrapper[4870]: E0216 17:32:49.225860 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:33:03 crc kubenswrapper[4870]: E0216 17:33:03.228203 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:33:03 crc kubenswrapper[4870]: I0216 17:33:03.795826 4870 scope.go:117] "RemoveContainer" containerID="687f030256e15077d250143c31028556dd38e9d94629746cb353d0b2be032dc2" Feb 16 17:33:03 crc kubenswrapper[4870]: I0216 17:33:03.826291 4870 scope.go:117] "RemoveContainer" containerID="0a64b818f88149d581ddc32e571e47e79beac8074bb1cb6357b4389314c16a49" Feb 16 17:33:03 crc kubenswrapper[4870]: I0216 17:33:03.868712 4870 
scope.go:117] "RemoveContainer" containerID="7fc263b14fed307298a467d214e3045c620ef37d81e34d537321be18fc49a3b1" Feb 16 17:33:03 crc kubenswrapper[4870]: I0216 17:33:03.913874 4870 scope.go:117] "RemoveContainer" containerID="ebf8cf98170dfc3a536285ada29bbd4953a1a82da68c25c10575bc65adc2fe98" Feb 16 17:33:03 crc kubenswrapper[4870]: I0216 17:33:03.962863 4870 scope.go:117] "RemoveContainer" containerID="0ca7c20f99426ba29cae5f3638ad4149c26ad729c07e82fbe78a38671dfc92ce" Feb 16 17:33:04 crc kubenswrapper[4870]: I0216 17:33:04.006228 4870 scope.go:117] "RemoveContainer" containerID="a5ebf26825258c524a9df6e63ad4652e66a39809bedd54e034ff01acf8cffcf3" Feb 16 17:33:04 crc kubenswrapper[4870]: I0216 17:33:04.057811 4870 scope.go:117] "RemoveContainer" containerID="ba409e796a259b013e0f7ce04c14b6f5a1b41e992f53ca3c2a6372de51626b07" Feb 16 17:33:05 crc kubenswrapper[4870]: I0216 17:33:05.366571 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:33:05 crc kubenswrapper[4870]: I0216 17:33:05.366914 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:33:08 crc kubenswrapper[4870]: I0216 17:33:08.056731 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7n7cd"] Feb 16 17:33:08 crc kubenswrapper[4870]: I0216 17:33:08.071396 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7n7cd"] Feb 16 17:33:08 crc kubenswrapper[4870]: I0216 17:33:08.239810 4870 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="2940d957-d580-4cea-8476-ace5524d8af3" path="/var/lib/kubelet/pods/2940d957-d580-4cea-8476-ace5524d8af3/volumes" Feb 16 17:33:09 crc kubenswrapper[4870]: I0216 17:33:09.056046 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fhbsn"] Feb 16 17:33:09 crc kubenswrapper[4870]: I0216 17:33:09.072210 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fhbsn"] Feb 16 17:33:10 crc kubenswrapper[4870]: I0216 17:33:10.234665 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6519f0c-2a4f-4712-b3f3-92effbbcec1d" path="/var/lib/kubelet/pods/d6519f0c-2a4f-4712-b3f3-92effbbcec1d/volumes" Feb 16 17:33:17 crc kubenswrapper[4870]: E0216 17:33:17.226398 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:33:32 crc kubenswrapper[4870]: E0216 17:33:32.224741 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:33:35 crc kubenswrapper[4870]: I0216 17:33:35.367181 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:33:35 crc kubenswrapper[4870]: I0216 17:33:35.368195 4870 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:33:35 crc kubenswrapper[4870]: I0216 17:33:35.368274 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:33:35 crc kubenswrapper[4870]: I0216 17:33:35.369737 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5480579f5b413919e2674eb7481044b2c4a41b4dfb3d9d951686496ecef4edf6"} pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:33:35 crc kubenswrapper[4870]: I0216 17:33:35.369828 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" containerID="cri-o://5480579f5b413919e2674eb7481044b2c4a41b4dfb3d9d951686496ecef4edf6" gracePeriod=600 Feb 16 17:33:36 crc kubenswrapper[4870]: I0216 17:33:36.162317 4870 generic.go:334] "Generic (PLEG): container finished" podID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerID="5480579f5b413919e2674eb7481044b2c4a41b4dfb3d9d951686496ecef4edf6" exitCode=0 Feb 16 17:33:36 crc kubenswrapper[4870]: I0216 17:33:36.162610 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerDied","Data":"5480579f5b413919e2674eb7481044b2c4a41b4dfb3d9d951686496ecef4edf6"} Feb 16 17:33:36 crc kubenswrapper[4870]: I0216 17:33:36.162641 4870 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerStarted","Data":"dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c"} Feb 16 17:33:36 crc kubenswrapper[4870]: I0216 17:33:36.162695 4870 scope.go:117] "RemoveContainer" containerID="c079bc40566686542af9724e0ff758feb2f26aebe7151b14f56a09ca014331c6" Feb 16 17:33:47 crc kubenswrapper[4870]: E0216 17:33:47.225556 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:33:53 crc kubenswrapper[4870]: I0216 17:33:53.047632 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-qqqc5"] Feb 16 17:33:53 crc kubenswrapper[4870]: I0216 17:33:53.059848 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-qqqc5"] Feb 16 17:33:54 crc kubenswrapper[4870]: I0216 17:33:54.234307 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dd7a33-0622-468a-b385-e66dec6d559e" path="/var/lib/kubelet/pods/92dd7a33-0622-468a-b385-e66dec6d559e/volumes" Feb 16 17:33:58 crc kubenswrapper[4870]: E0216 17:33:58.229265 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:34:04 crc kubenswrapper[4870]: I0216 17:34:04.204258 4870 scope.go:117] "RemoveContainer" 
containerID="78af0f0b774cce6a4dc1d1cf7215a91cd4adae279a40618cee639a3bf30b79a1" Feb 16 17:34:04 crc kubenswrapper[4870]: I0216 17:34:04.237620 4870 scope.go:117] "RemoveContainer" containerID="4d918c8df808253acefa7999840f6875d7fe2262246e00a6fc68afeb3760040d" Feb 16 17:34:04 crc kubenswrapper[4870]: I0216 17:34:04.305748 4870 scope.go:117] "RemoveContainer" containerID="78a154cfc26923be8ac8821860922e406c1b50690f786064b07517d494f302b9" Feb 16 17:34:13 crc kubenswrapper[4870]: E0216 17:34:13.225350 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:34:26 crc kubenswrapper[4870]: E0216 17:34:26.233137 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:34:37 crc kubenswrapper[4870]: E0216 17:34:37.225199 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:34:51 crc kubenswrapper[4870]: E0216 17:34:51.224930 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:35:02 crc kubenswrapper[4870]: E0216 17:35:02.225048 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:35:15 crc kubenswrapper[4870]: E0216 17:35:15.225176 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:35:30 crc kubenswrapper[4870]: E0216 17:35:30.226543 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:35:35 crc kubenswrapper[4870]: I0216 17:35:35.367178 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:35:35 crc kubenswrapper[4870]: I0216 17:35:35.369002 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:35:41 crc kubenswrapper[4870]: E0216 17:35:41.225222 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:35:54 crc kubenswrapper[4870]: E0216 17:35:54.229094 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:36:05 crc kubenswrapper[4870]: I0216 17:36:05.366974 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:36:05 crc kubenswrapper[4870]: I0216 17:36:05.367548 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:36:09 crc kubenswrapper[4870]: E0216 17:36:09.225679 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" 
podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:36:23 crc kubenswrapper[4870]: I0216 17:36:23.225031 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:36:23 crc kubenswrapper[4870]: E0216 17:36:23.347751 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:36:23 crc kubenswrapper[4870]: E0216 17:36:23.347985 4870 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:36:23 crc kubenswrapper[4870]: E0216 17:36:23.348195 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7b2p7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6hkdm_openstack(34a86750-1fff-4add-8462-7ab805ec7f89): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:36:23 crc kubenswrapper[4870]: E0216 17:36:23.349791 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:36:35 crc kubenswrapper[4870]: E0216 17:36:35.226680 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:36:35 crc kubenswrapper[4870]: I0216 17:36:35.366978 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:36:35 crc kubenswrapper[4870]: I0216 17:36:35.367070 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:36:35 crc kubenswrapper[4870]: I0216 17:36:35.367129 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:36:35 crc kubenswrapper[4870]: I0216 17:36:35.368298 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c"} pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:36:35 crc kubenswrapper[4870]: I0216 17:36:35.368447 4870 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" containerID="cri-o://dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" gracePeriod=600 Feb 16 17:36:35 crc kubenswrapper[4870]: E0216 17:36:35.491292 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:36:35 crc kubenswrapper[4870]: I0216 17:36:35.990583 4870 generic.go:334] "Generic (PLEG): container finished" podID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" exitCode=0 Feb 16 17:36:35 crc kubenswrapper[4870]: I0216 17:36:35.990669 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerDied","Data":"dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c"} Feb 16 17:36:35 crc kubenswrapper[4870]: I0216 17:36:35.990993 4870 scope.go:117] "RemoveContainer" containerID="5480579f5b413919e2674eb7481044b2c4a41b4dfb3d9d951686496ecef4edf6" Feb 16 17:36:35 crc kubenswrapper[4870]: I0216 17:36:35.991446 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:36:35 crc kubenswrapper[4870]: E0216 17:36:35.991765 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.224770 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d6kpl"] Feb 16 17:36:47 crc kubenswrapper[4870]: E0216 17:36:47.225928 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0cb5107-99f1-4ecd-9172-90085dda2643" containerName="extract-content" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.226100 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0cb5107-99f1-4ecd-9172-90085dda2643" containerName="extract-content" Feb 16 17:36:47 crc kubenswrapper[4870]: E0216 17:36:47.226146 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0cb5107-99f1-4ecd-9172-90085dda2643" containerName="extract-utilities" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.226155 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0cb5107-99f1-4ecd-9172-90085dda2643" containerName="extract-utilities" Feb 16 17:36:47 crc kubenswrapper[4870]: E0216 17:36:47.226169 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0cb5107-99f1-4ecd-9172-90085dda2643" containerName="registry-server" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.226179 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0cb5107-99f1-4ecd-9172-90085dda2643" containerName="registry-server" Feb 16 17:36:47 crc kubenswrapper[4870]: E0216 17:36:47.226193 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b4cd71e-c237-4f98-a315-94e61c1a4fec" containerName="extract-utilities" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.226202 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b4cd71e-c237-4f98-a315-94e61c1a4fec" 
containerName="extract-utilities" Feb 16 17:36:47 crc kubenswrapper[4870]: E0216 17:36:47.226219 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b4cd71e-c237-4f98-a315-94e61c1a4fec" containerName="extract-content" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.226228 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b4cd71e-c237-4f98-a315-94e61c1a4fec" containerName="extract-content" Feb 16 17:36:47 crc kubenswrapper[4870]: E0216 17:36:47.226240 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b4cd71e-c237-4f98-a315-94e61c1a4fec" containerName="registry-server" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.226247 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b4cd71e-c237-4f98-a315-94e61c1a4fec" containerName="registry-server" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.226507 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b4cd71e-c237-4f98-a315-94e61c1a4fec" containerName="registry-server" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.226537 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0cb5107-99f1-4ecd-9172-90085dda2643" containerName="registry-server" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.228293 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6kpl" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.245155 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6kpl"] Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.285422 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/251d1c1b-d622-4e06-abde-17a42510ff7a-catalog-content\") pod \"redhat-marketplace-d6kpl\" (UID: \"251d1c1b-d622-4e06-abde-17a42510ff7a\") " pod="openshift-marketplace/redhat-marketplace-d6kpl" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.285724 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6spz\" (UniqueName: \"kubernetes.io/projected/251d1c1b-d622-4e06-abde-17a42510ff7a-kube-api-access-b6spz\") pod \"redhat-marketplace-d6kpl\" (UID: \"251d1c1b-d622-4e06-abde-17a42510ff7a\") " pod="openshift-marketplace/redhat-marketplace-d6kpl" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.285972 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/251d1c1b-d622-4e06-abde-17a42510ff7a-utilities\") pod \"redhat-marketplace-d6kpl\" (UID: \"251d1c1b-d622-4e06-abde-17a42510ff7a\") " pod="openshift-marketplace/redhat-marketplace-d6kpl" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.387467 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6spz\" (UniqueName: \"kubernetes.io/projected/251d1c1b-d622-4e06-abde-17a42510ff7a-kube-api-access-b6spz\") pod \"redhat-marketplace-d6kpl\" (UID: \"251d1c1b-d622-4e06-abde-17a42510ff7a\") " pod="openshift-marketplace/redhat-marketplace-d6kpl" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.387618 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/251d1c1b-d622-4e06-abde-17a42510ff7a-utilities\") pod \"redhat-marketplace-d6kpl\" (UID: \"251d1c1b-d622-4e06-abde-17a42510ff7a\") " pod="openshift-marketplace/redhat-marketplace-d6kpl" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.387663 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/251d1c1b-d622-4e06-abde-17a42510ff7a-catalog-content\") pod \"redhat-marketplace-d6kpl\" (UID: \"251d1c1b-d622-4e06-abde-17a42510ff7a\") " pod="openshift-marketplace/redhat-marketplace-d6kpl" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.388350 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/251d1c1b-d622-4e06-abde-17a42510ff7a-utilities\") pod \"redhat-marketplace-d6kpl\" (UID: \"251d1c1b-d622-4e06-abde-17a42510ff7a\") " pod="openshift-marketplace/redhat-marketplace-d6kpl" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.388405 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/251d1c1b-d622-4e06-abde-17a42510ff7a-catalog-content\") pod \"redhat-marketplace-d6kpl\" (UID: \"251d1c1b-d622-4e06-abde-17a42510ff7a\") " pod="openshift-marketplace/redhat-marketplace-d6kpl" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.426197 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6spz\" (UniqueName: \"kubernetes.io/projected/251d1c1b-d622-4e06-abde-17a42510ff7a-kube-api-access-b6spz\") pod \"redhat-marketplace-d6kpl\" (UID: \"251d1c1b-d622-4e06-abde-17a42510ff7a\") " pod="openshift-marketplace/redhat-marketplace-d6kpl" Feb 16 17:36:47 crc kubenswrapper[4870]: I0216 17:36:47.580873 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6kpl" Feb 16 17:36:48 crc kubenswrapper[4870]: I0216 17:36:48.099752 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6kpl"] Feb 16 17:36:48 crc kubenswrapper[4870]: W0216 17:36:48.106149 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod251d1c1b_d622_4e06_abde_17a42510ff7a.slice/crio-a001cc9821c17f9657937f58898799d268256ee50ca0c0cbc8ee85f8002e00e5 WatchSource:0}: Error finding container a001cc9821c17f9657937f58898799d268256ee50ca0c0cbc8ee85f8002e00e5: Status 404 returned error can't find the container with id a001cc9821c17f9657937f58898799d268256ee50ca0c0cbc8ee85f8002e00e5 Feb 16 17:36:48 crc kubenswrapper[4870]: I0216 17:36:48.115244 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6kpl" event={"ID":"251d1c1b-d622-4e06-abde-17a42510ff7a","Type":"ContainerStarted","Data":"a001cc9821c17f9657937f58898799d268256ee50ca0c0cbc8ee85f8002e00e5"} Feb 16 17:36:49 crc kubenswrapper[4870]: I0216 17:36:49.127525 4870 generic.go:334] "Generic (PLEG): container finished" podID="251d1c1b-d622-4e06-abde-17a42510ff7a" containerID="b9a3f58a0382bf6f75deeb74e0fe042a555177b57630639c0abec2b3260deeac" exitCode=0 Feb 16 17:36:49 crc kubenswrapper[4870]: I0216 17:36:49.127749 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6kpl" event={"ID":"251d1c1b-d622-4e06-abde-17a42510ff7a","Type":"ContainerDied","Data":"b9a3f58a0382bf6f75deeb74e0fe042a555177b57630639c0abec2b3260deeac"} Feb 16 17:36:50 crc kubenswrapper[4870]: E0216 17:36:50.226198 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:36:51 crc kubenswrapper[4870]: I0216 17:36:51.152789 4870 generic.go:334] "Generic (PLEG): container finished" podID="251d1c1b-d622-4e06-abde-17a42510ff7a" containerID="3d397933c85c2ce09049c5122368cd45009f582165acff8c255ee9ae1bbea147" exitCode=0 Feb 16 17:36:51 crc kubenswrapper[4870]: I0216 17:36:51.154244 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6kpl" event={"ID":"251d1c1b-d622-4e06-abde-17a42510ff7a","Type":"ContainerDied","Data":"3d397933c85c2ce09049c5122368cd45009f582165acff8c255ee9ae1bbea147"} Feb 16 17:36:51 crc kubenswrapper[4870]: I0216 17:36:51.226721 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:36:51 crc kubenswrapper[4870]: E0216 17:36:51.227108 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:36:52 crc kubenswrapper[4870]: I0216 17:36:52.166672 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6kpl" event={"ID":"251d1c1b-d622-4e06-abde-17a42510ff7a","Type":"ContainerStarted","Data":"9233bfcb4cb1b026016b4969099a6b1cf101fcd4bbc4e4dcecd919e30002d337"} Feb 16 17:36:52 crc kubenswrapper[4870]: I0216 17:36:52.187105 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d6kpl" podStartSLOduration=2.76105927 
podStartE2EDuration="5.187086694s" podCreationTimestamp="2026-02-16 17:36:47 +0000 UTC" firstStartedPulling="2026-02-16 17:36:49.12966989 +0000 UTC m=+2213.613134284" lastFinishedPulling="2026-02-16 17:36:51.555697304 +0000 UTC m=+2216.039161708" observedRunningTime="2026-02-16 17:36:52.183570954 +0000 UTC m=+2216.667035338" watchObservedRunningTime="2026-02-16 17:36:52.187086694 +0000 UTC m=+2216.670551078" Feb 16 17:36:57 crc kubenswrapper[4870]: I0216 17:36:57.581937 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d6kpl" Feb 16 17:36:57 crc kubenswrapper[4870]: I0216 17:36:57.582354 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d6kpl" Feb 16 17:36:57 crc kubenswrapper[4870]: I0216 17:36:57.663196 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d6kpl" Feb 16 17:36:58 crc kubenswrapper[4870]: I0216 17:36:58.286968 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d6kpl" Feb 16 17:36:58 crc kubenswrapper[4870]: I0216 17:36:58.344442 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6kpl"] Feb 16 17:37:00 crc kubenswrapper[4870]: I0216 17:37:00.238577 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d6kpl" podUID="251d1c1b-d622-4e06-abde-17a42510ff7a" containerName="registry-server" containerID="cri-o://9233bfcb4cb1b026016b4969099a6b1cf101fcd4bbc4e4dcecd919e30002d337" gracePeriod=2 Feb 16 17:37:00 crc kubenswrapper[4870]: E0216 17:37:00.471395 4870 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod251d1c1b_d622_4e06_abde_17a42510ff7a.slice/crio-9233bfcb4cb1b026016b4969099a6b1cf101fcd4bbc4e4dcecd919e30002d337.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod251d1c1b_d622_4e06_abde_17a42510ff7a.slice/crio-conmon-9233bfcb4cb1b026016b4969099a6b1cf101fcd4bbc4e4dcecd919e30002d337.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:37:00 crc kubenswrapper[4870]: I0216 17:37:00.745917 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6kpl" Feb 16 17:37:00 crc kubenswrapper[4870]: I0216 17:37:00.864369 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/251d1c1b-d622-4e06-abde-17a42510ff7a-catalog-content\") pod \"251d1c1b-d622-4e06-abde-17a42510ff7a\" (UID: \"251d1c1b-d622-4e06-abde-17a42510ff7a\") " Feb 16 17:37:00 crc kubenswrapper[4870]: I0216 17:37:00.864488 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/251d1c1b-d622-4e06-abde-17a42510ff7a-utilities\") pod \"251d1c1b-d622-4e06-abde-17a42510ff7a\" (UID: \"251d1c1b-d622-4e06-abde-17a42510ff7a\") " Feb 16 17:37:00 crc kubenswrapper[4870]: I0216 17:37:00.864668 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6spz\" (UniqueName: \"kubernetes.io/projected/251d1c1b-d622-4e06-abde-17a42510ff7a-kube-api-access-b6spz\") pod \"251d1c1b-d622-4e06-abde-17a42510ff7a\" (UID: \"251d1c1b-d622-4e06-abde-17a42510ff7a\") " Feb 16 17:37:00 crc kubenswrapper[4870]: I0216 17:37:00.865683 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/251d1c1b-d622-4e06-abde-17a42510ff7a-utilities" (OuterVolumeSpecName: "utilities") 
pod "251d1c1b-d622-4e06-abde-17a42510ff7a" (UID: "251d1c1b-d622-4e06-abde-17a42510ff7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:37:00 crc kubenswrapper[4870]: I0216 17:37:00.870077 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/251d1c1b-d622-4e06-abde-17a42510ff7a-kube-api-access-b6spz" (OuterVolumeSpecName: "kube-api-access-b6spz") pod "251d1c1b-d622-4e06-abde-17a42510ff7a" (UID: "251d1c1b-d622-4e06-abde-17a42510ff7a"). InnerVolumeSpecName "kube-api-access-b6spz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:37:00 crc kubenswrapper[4870]: I0216 17:37:00.898359 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/251d1c1b-d622-4e06-abde-17a42510ff7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "251d1c1b-d622-4e06-abde-17a42510ff7a" (UID: "251d1c1b-d622-4e06-abde-17a42510ff7a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:37:00 crc kubenswrapper[4870]: I0216 17:37:00.967483 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/251d1c1b-d622-4e06-abde-17a42510ff7a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:37:00 crc kubenswrapper[4870]: I0216 17:37:00.967529 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/251d1c1b-d622-4e06-abde-17a42510ff7a-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:37:00 crc kubenswrapper[4870]: I0216 17:37:00.967542 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6spz\" (UniqueName: \"kubernetes.io/projected/251d1c1b-d622-4e06-abde-17a42510ff7a-kube-api-access-b6spz\") on node \"crc\" DevicePath \"\"" Feb 16 17:37:01 crc kubenswrapper[4870]: I0216 17:37:01.250403 4870 generic.go:334] "Generic (PLEG): container finished" podID="251d1c1b-d622-4e06-abde-17a42510ff7a" containerID="9233bfcb4cb1b026016b4969099a6b1cf101fcd4bbc4e4dcecd919e30002d337" exitCode=0 Feb 16 17:37:01 crc kubenswrapper[4870]: I0216 17:37:01.250484 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6kpl" Feb 16 17:37:01 crc kubenswrapper[4870]: I0216 17:37:01.250518 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6kpl" event={"ID":"251d1c1b-d622-4e06-abde-17a42510ff7a","Type":"ContainerDied","Data":"9233bfcb4cb1b026016b4969099a6b1cf101fcd4bbc4e4dcecd919e30002d337"} Feb 16 17:37:01 crc kubenswrapper[4870]: I0216 17:37:01.250818 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6kpl" event={"ID":"251d1c1b-d622-4e06-abde-17a42510ff7a","Type":"ContainerDied","Data":"a001cc9821c17f9657937f58898799d268256ee50ca0c0cbc8ee85f8002e00e5"} Feb 16 17:37:01 crc kubenswrapper[4870]: I0216 17:37:01.250850 4870 scope.go:117] "RemoveContainer" containerID="9233bfcb4cb1b026016b4969099a6b1cf101fcd4bbc4e4dcecd919e30002d337" Feb 16 17:37:01 crc kubenswrapper[4870]: I0216 17:37:01.289324 4870 scope.go:117] "RemoveContainer" containerID="3d397933c85c2ce09049c5122368cd45009f582165acff8c255ee9ae1bbea147" Feb 16 17:37:01 crc kubenswrapper[4870]: I0216 17:37:01.299253 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6kpl"] Feb 16 17:37:01 crc kubenswrapper[4870]: I0216 17:37:01.310173 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6kpl"] Feb 16 17:37:01 crc kubenswrapper[4870]: I0216 17:37:01.325345 4870 scope.go:117] "RemoveContainer" containerID="b9a3f58a0382bf6f75deeb74e0fe042a555177b57630639c0abec2b3260deeac" Feb 16 17:37:01 crc kubenswrapper[4870]: I0216 17:37:01.400118 4870 scope.go:117] "RemoveContainer" containerID="9233bfcb4cb1b026016b4969099a6b1cf101fcd4bbc4e4dcecd919e30002d337" Feb 16 17:37:01 crc kubenswrapper[4870]: E0216 17:37:01.404801 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9233bfcb4cb1b026016b4969099a6b1cf101fcd4bbc4e4dcecd919e30002d337\": container with ID starting with 9233bfcb4cb1b026016b4969099a6b1cf101fcd4bbc4e4dcecd919e30002d337 not found: ID does not exist" containerID="9233bfcb4cb1b026016b4969099a6b1cf101fcd4bbc4e4dcecd919e30002d337" Feb 16 17:37:01 crc kubenswrapper[4870]: I0216 17:37:01.404888 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9233bfcb4cb1b026016b4969099a6b1cf101fcd4bbc4e4dcecd919e30002d337"} err="failed to get container status \"9233bfcb4cb1b026016b4969099a6b1cf101fcd4bbc4e4dcecd919e30002d337\": rpc error: code = NotFound desc = could not find container \"9233bfcb4cb1b026016b4969099a6b1cf101fcd4bbc4e4dcecd919e30002d337\": container with ID starting with 9233bfcb4cb1b026016b4969099a6b1cf101fcd4bbc4e4dcecd919e30002d337 not found: ID does not exist" Feb 16 17:37:01 crc kubenswrapper[4870]: I0216 17:37:01.404967 4870 scope.go:117] "RemoveContainer" containerID="3d397933c85c2ce09049c5122368cd45009f582165acff8c255ee9ae1bbea147" Feb 16 17:37:01 crc kubenswrapper[4870]: E0216 17:37:01.406151 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d397933c85c2ce09049c5122368cd45009f582165acff8c255ee9ae1bbea147\": container with ID starting with 3d397933c85c2ce09049c5122368cd45009f582165acff8c255ee9ae1bbea147 not found: ID does not exist" containerID="3d397933c85c2ce09049c5122368cd45009f582165acff8c255ee9ae1bbea147" Feb 16 17:37:01 crc kubenswrapper[4870]: I0216 17:37:01.406266 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d397933c85c2ce09049c5122368cd45009f582165acff8c255ee9ae1bbea147"} err="failed to get container status \"3d397933c85c2ce09049c5122368cd45009f582165acff8c255ee9ae1bbea147\": rpc error: code = NotFound desc = could not find container \"3d397933c85c2ce09049c5122368cd45009f582165acff8c255ee9ae1bbea147\": container with ID 
starting with 3d397933c85c2ce09049c5122368cd45009f582165acff8c255ee9ae1bbea147 not found: ID does not exist" Feb 16 17:37:01 crc kubenswrapper[4870]: I0216 17:37:01.406353 4870 scope.go:117] "RemoveContainer" containerID="b9a3f58a0382bf6f75deeb74e0fe042a555177b57630639c0abec2b3260deeac" Feb 16 17:37:01 crc kubenswrapper[4870]: E0216 17:37:01.407695 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9a3f58a0382bf6f75deeb74e0fe042a555177b57630639c0abec2b3260deeac\": container with ID starting with b9a3f58a0382bf6f75deeb74e0fe042a555177b57630639c0abec2b3260deeac not found: ID does not exist" containerID="b9a3f58a0382bf6f75deeb74e0fe042a555177b57630639c0abec2b3260deeac" Feb 16 17:37:01 crc kubenswrapper[4870]: I0216 17:37:01.407737 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9a3f58a0382bf6f75deeb74e0fe042a555177b57630639c0abec2b3260deeac"} err="failed to get container status \"b9a3f58a0382bf6f75deeb74e0fe042a555177b57630639c0abec2b3260deeac\": rpc error: code = NotFound desc = could not find container \"b9a3f58a0382bf6f75deeb74e0fe042a555177b57630639c0abec2b3260deeac\": container with ID starting with b9a3f58a0382bf6f75deeb74e0fe042a555177b57630639c0abec2b3260deeac not found: ID does not exist" Feb 16 17:37:02 crc kubenswrapper[4870]: I0216 17:37:02.224902 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:37:02 crc kubenswrapper[4870]: E0216 17:37:02.226026 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" 
podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:37:02 crc kubenswrapper[4870]: E0216 17:37:02.226713 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:37:02 crc kubenswrapper[4870]: I0216 17:37:02.239905 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="251d1c1b-d622-4e06-abde-17a42510ff7a" path="/var/lib/kubelet/pods/251d1c1b-d622-4e06-abde-17a42510ff7a/volumes" Feb 16 17:37:13 crc kubenswrapper[4870]: E0216 17:37:13.225480 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:37:15 crc kubenswrapper[4870]: I0216 17:37:15.223448 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:37:15 crc kubenswrapper[4870]: E0216 17:37:15.224110 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:37:26 crc kubenswrapper[4870]: I0216 17:37:26.233199 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:37:26 crc kubenswrapper[4870]: 
E0216 17:37:26.235419 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:37:28 crc kubenswrapper[4870]: E0216 17:37:28.225520 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:37:38 crc kubenswrapper[4870]: I0216 17:37:38.223385 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:37:38 crc kubenswrapper[4870]: E0216 17:37:38.224144 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:37:41 crc kubenswrapper[4870]: E0216 17:37:41.225240 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:37:41 crc kubenswrapper[4870]: I0216 17:37:41.975087 4870 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d7kzc"] Feb 16 17:37:41 crc kubenswrapper[4870]: E0216 17:37:41.975819 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="251d1c1b-d622-4e06-abde-17a42510ff7a" containerName="registry-server" Feb 16 17:37:41 crc kubenswrapper[4870]: I0216 17:37:41.975839 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="251d1c1b-d622-4e06-abde-17a42510ff7a" containerName="registry-server" Feb 16 17:37:41 crc kubenswrapper[4870]: E0216 17:37:41.975853 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="251d1c1b-d622-4e06-abde-17a42510ff7a" containerName="extract-content" Feb 16 17:37:41 crc kubenswrapper[4870]: I0216 17:37:41.975859 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="251d1c1b-d622-4e06-abde-17a42510ff7a" containerName="extract-content" Feb 16 17:37:41 crc kubenswrapper[4870]: E0216 17:37:41.975885 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="251d1c1b-d622-4e06-abde-17a42510ff7a" containerName="extract-utilities" Feb 16 17:37:41 crc kubenswrapper[4870]: I0216 17:37:41.975892 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="251d1c1b-d622-4e06-abde-17a42510ff7a" containerName="extract-utilities" Feb 16 17:37:41 crc kubenswrapper[4870]: I0216 17:37:41.976143 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="251d1c1b-d622-4e06-abde-17a42510ff7a" containerName="registry-server" Feb 16 17:37:41 crc kubenswrapper[4870]: I0216 17:37:41.977649 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d7kzc" Feb 16 17:37:41 crc kubenswrapper[4870]: I0216 17:37:41.991741 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7kzc"] Feb 16 17:37:42 crc kubenswrapper[4870]: I0216 17:37:42.117440 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrw86\" (UniqueName: \"kubernetes.io/projected/01e5d5b9-01ec-4e85-910d-26a8ca382930-kube-api-access-lrw86\") pod \"certified-operators-d7kzc\" (UID: \"01e5d5b9-01ec-4e85-910d-26a8ca382930\") " pod="openshift-marketplace/certified-operators-d7kzc" Feb 16 17:37:42 crc kubenswrapper[4870]: I0216 17:37:42.117603 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e5d5b9-01ec-4e85-910d-26a8ca382930-utilities\") pod \"certified-operators-d7kzc\" (UID: \"01e5d5b9-01ec-4e85-910d-26a8ca382930\") " pod="openshift-marketplace/certified-operators-d7kzc" Feb 16 17:37:42 crc kubenswrapper[4870]: I0216 17:37:42.117798 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e5d5b9-01ec-4e85-910d-26a8ca382930-catalog-content\") pod \"certified-operators-d7kzc\" (UID: \"01e5d5b9-01ec-4e85-910d-26a8ca382930\") " pod="openshift-marketplace/certified-operators-d7kzc" Feb 16 17:37:42 crc kubenswrapper[4870]: I0216 17:37:42.220478 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e5d5b9-01ec-4e85-910d-26a8ca382930-catalog-content\") pod \"certified-operators-d7kzc\" (UID: \"01e5d5b9-01ec-4e85-910d-26a8ca382930\") " pod="openshift-marketplace/certified-operators-d7kzc" Feb 16 17:37:42 crc kubenswrapper[4870]: I0216 17:37:42.220699 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lrw86\" (UniqueName: \"kubernetes.io/projected/01e5d5b9-01ec-4e85-910d-26a8ca382930-kube-api-access-lrw86\") pod \"certified-operators-d7kzc\" (UID: \"01e5d5b9-01ec-4e85-910d-26a8ca382930\") " pod="openshift-marketplace/certified-operators-d7kzc" Feb 16 17:37:42 crc kubenswrapper[4870]: I0216 17:37:42.220737 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e5d5b9-01ec-4e85-910d-26a8ca382930-utilities\") pod \"certified-operators-d7kzc\" (UID: \"01e5d5b9-01ec-4e85-910d-26a8ca382930\") " pod="openshift-marketplace/certified-operators-d7kzc" Feb 16 17:37:42 crc kubenswrapper[4870]: I0216 17:37:42.221199 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e5d5b9-01ec-4e85-910d-26a8ca382930-catalog-content\") pod \"certified-operators-d7kzc\" (UID: \"01e5d5b9-01ec-4e85-910d-26a8ca382930\") " pod="openshift-marketplace/certified-operators-d7kzc" Feb 16 17:37:42 crc kubenswrapper[4870]: I0216 17:37:42.221229 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e5d5b9-01ec-4e85-910d-26a8ca382930-utilities\") pod \"certified-operators-d7kzc\" (UID: \"01e5d5b9-01ec-4e85-910d-26a8ca382930\") " pod="openshift-marketplace/certified-operators-d7kzc" Feb 16 17:37:42 crc kubenswrapper[4870]: I0216 17:37:42.242759 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrw86\" (UniqueName: \"kubernetes.io/projected/01e5d5b9-01ec-4e85-910d-26a8ca382930-kube-api-access-lrw86\") pod \"certified-operators-d7kzc\" (UID: \"01e5d5b9-01ec-4e85-910d-26a8ca382930\") " pod="openshift-marketplace/certified-operators-d7kzc" Feb 16 17:37:42 crc kubenswrapper[4870]: I0216 17:37:42.314347 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d7kzc" Feb 16 17:37:42 crc kubenswrapper[4870]: I0216 17:37:42.865338 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7kzc"] Feb 16 17:37:43 crc kubenswrapper[4870]: I0216 17:37:43.692717 4870 generic.go:334] "Generic (PLEG): container finished" podID="01e5d5b9-01ec-4e85-910d-26a8ca382930" containerID="7f4cc1a72930bb39ed6b0c71a6563796f7cfa46b6821de3ca4e60c0f71cdc3dc" exitCode=0 Feb 16 17:37:43 crc kubenswrapper[4870]: I0216 17:37:43.692911 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7kzc" event={"ID":"01e5d5b9-01ec-4e85-910d-26a8ca382930","Type":"ContainerDied","Data":"7f4cc1a72930bb39ed6b0c71a6563796f7cfa46b6821de3ca4e60c0f71cdc3dc"} Feb 16 17:37:43 crc kubenswrapper[4870]: I0216 17:37:43.693059 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7kzc" event={"ID":"01e5d5b9-01ec-4e85-910d-26a8ca382930","Type":"ContainerStarted","Data":"75404400e8dd5f9db5f5569e29ec71c9ff8f986e2ed96e1dc0f8ac10c2bba532"} Feb 16 17:37:47 crc kubenswrapper[4870]: I0216 17:37:47.729436 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7kzc" event={"ID":"01e5d5b9-01ec-4e85-910d-26a8ca382930","Type":"ContainerStarted","Data":"a504c8aba35786c2c067dc79acf5021b07c8fedfcfa4f29f7cc573118b8ad211"} Feb 16 17:37:48 crc kubenswrapper[4870]: I0216 17:37:48.744139 4870 generic.go:334] "Generic (PLEG): container finished" podID="01e5d5b9-01ec-4e85-910d-26a8ca382930" containerID="a504c8aba35786c2c067dc79acf5021b07c8fedfcfa4f29f7cc573118b8ad211" exitCode=0 Feb 16 17:37:48 crc kubenswrapper[4870]: I0216 17:37:48.744258 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7kzc" 
event={"ID":"01e5d5b9-01ec-4e85-910d-26a8ca382930","Type":"ContainerDied","Data":"a504c8aba35786c2c067dc79acf5021b07c8fedfcfa4f29f7cc573118b8ad211"} Feb 16 17:37:49 crc kubenswrapper[4870]: I0216 17:37:49.761012 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7kzc" event={"ID":"01e5d5b9-01ec-4e85-910d-26a8ca382930","Type":"ContainerStarted","Data":"ab9069803deaa98cb93f0d57791eed5c46f4c7487ca9394fc469f08b8f5df0fc"} Feb 16 17:37:49 crc kubenswrapper[4870]: I0216 17:37:49.794265 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d7kzc" podStartSLOduration=3.336786819 podStartE2EDuration="8.794244338s" podCreationTimestamp="2026-02-16 17:37:41 +0000 UTC" firstStartedPulling="2026-02-16 17:37:43.697337228 +0000 UTC m=+2268.180801642" lastFinishedPulling="2026-02-16 17:37:49.154794777 +0000 UTC m=+2273.638259161" observedRunningTime="2026-02-16 17:37:49.788560557 +0000 UTC m=+2274.272024941" watchObservedRunningTime="2026-02-16 17:37:49.794244338 +0000 UTC m=+2274.277708722" Feb 16 17:37:50 crc kubenswrapper[4870]: I0216 17:37:50.223780 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:37:50 crc kubenswrapper[4870]: E0216 17:37:50.228426 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:37:52 crc kubenswrapper[4870]: I0216 17:37:52.314586 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d7kzc" Feb 16 17:37:52 crc 
kubenswrapper[4870]: I0216 17:37:52.314960 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d7kzc" Feb 16 17:37:52 crc kubenswrapper[4870]: I0216 17:37:52.360409 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d7kzc" Feb 16 17:37:53 crc kubenswrapper[4870]: E0216 17:37:53.226928 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:38:02 crc kubenswrapper[4870]: I0216 17:38:02.368148 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d7kzc" Feb 16 17:38:02 crc kubenswrapper[4870]: I0216 17:38:02.466115 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7kzc"] Feb 16 17:38:02 crc kubenswrapper[4870]: I0216 17:38:02.512154 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s487d"] Feb 16 17:38:02 crc kubenswrapper[4870]: I0216 17:38:02.512835 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s487d" podUID="db6fcf97-0653-4411-b5ae-a3af8532801d" containerName="registry-server" containerID="cri-o://919e22497000a7483f11b11820ce3ca05744576f52aab7ce22bc80574db064bd" gracePeriod=2 Feb 16 17:38:02 crc kubenswrapper[4870]: I0216 17:38:02.909772 4870 generic.go:334] "Generic (PLEG): container finished" podID="db6fcf97-0653-4411-b5ae-a3af8532801d" containerID="919e22497000a7483f11b11820ce3ca05744576f52aab7ce22bc80574db064bd" exitCode=0 Feb 16 17:38:02 crc kubenswrapper[4870]: I0216 17:38:02.911163 4870 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s487d" event={"ID":"db6fcf97-0653-4411-b5ae-a3af8532801d","Type":"ContainerDied","Data":"919e22497000a7483f11b11820ce3ca05744576f52aab7ce22bc80574db064bd"} Feb 16 17:38:03 crc kubenswrapper[4870]: I0216 17:38:03.069104 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s487d" Feb 16 17:38:03 crc kubenswrapper[4870]: I0216 17:38:03.170314 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db6fcf97-0653-4411-b5ae-a3af8532801d-catalog-content\") pod \"db6fcf97-0653-4411-b5ae-a3af8532801d\" (UID: \"db6fcf97-0653-4411-b5ae-a3af8532801d\") " Feb 16 17:38:03 crc kubenswrapper[4870]: I0216 17:38:03.170611 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7rsz\" (UniqueName: \"kubernetes.io/projected/db6fcf97-0653-4411-b5ae-a3af8532801d-kube-api-access-r7rsz\") pod \"db6fcf97-0653-4411-b5ae-a3af8532801d\" (UID: \"db6fcf97-0653-4411-b5ae-a3af8532801d\") " Feb 16 17:38:03 crc kubenswrapper[4870]: I0216 17:38:03.170727 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db6fcf97-0653-4411-b5ae-a3af8532801d-utilities\") pod \"db6fcf97-0653-4411-b5ae-a3af8532801d\" (UID: \"db6fcf97-0653-4411-b5ae-a3af8532801d\") " Feb 16 17:38:03 crc kubenswrapper[4870]: I0216 17:38:03.171738 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db6fcf97-0653-4411-b5ae-a3af8532801d-utilities" (OuterVolumeSpecName: "utilities") pod "db6fcf97-0653-4411-b5ae-a3af8532801d" (UID: "db6fcf97-0653-4411-b5ae-a3af8532801d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:38:03 crc kubenswrapper[4870]: I0216 17:38:03.195173 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db6fcf97-0653-4411-b5ae-a3af8532801d-kube-api-access-r7rsz" (OuterVolumeSpecName: "kube-api-access-r7rsz") pod "db6fcf97-0653-4411-b5ae-a3af8532801d" (UID: "db6fcf97-0653-4411-b5ae-a3af8532801d"). InnerVolumeSpecName "kube-api-access-r7rsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:38:03 crc kubenswrapper[4870]: I0216 17:38:03.230153 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db6fcf97-0653-4411-b5ae-a3af8532801d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db6fcf97-0653-4411-b5ae-a3af8532801d" (UID: "db6fcf97-0653-4411-b5ae-a3af8532801d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:38:03 crc kubenswrapper[4870]: I0216 17:38:03.275111 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db6fcf97-0653-4411-b5ae-a3af8532801d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:38:03 crc kubenswrapper[4870]: I0216 17:38:03.275176 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7rsz\" (UniqueName: \"kubernetes.io/projected/db6fcf97-0653-4411-b5ae-a3af8532801d-kube-api-access-r7rsz\") on node \"crc\" DevicePath \"\"" Feb 16 17:38:03 crc kubenswrapper[4870]: I0216 17:38:03.275193 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db6fcf97-0653-4411-b5ae-a3af8532801d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:38:03 crc kubenswrapper[4870]: I0216 17:38:03.921143 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s487d" 
event={"ID":"db6fcf97-0653-4411-b5ae-a3af8532801d","Type":"ContainerDied","Data":"787bb4458d0bfbe4aadec55476fab9ba6a9fb65ba8011f09e452a73ffacade2e"} Feb 16 17:38:03 crc kubenswrapper[4870]: I0216 17:38:03.921190 4870 scope.go:117] "RemoveContainer" containerID="919e22497000a7483f11b11820ce3ca05744576f52aab7ce22bc80574db064bd" Feb 16 17:38:03 crc kubenswrapper[4870]: I0216 17:38:03.921302 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s487d" Feb 16 17:38:03 crc kubenswrapper[4870]: I0216 17:38:03.951688 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s487d"] Feb 16 17:38:03 crc kubenswrapper[4870]: I0216 17:38:03.957189 4870 scope.go:117] "RemoveContainer" containerID="1e275c0f2addc28c03afb32453d4e3f3fa8ae3fdfccbdbf7d90a4d0a35c5db0d" Feb 16 17:38:03 crc kubenswrapper[4870]: I0216 17:38:03.964779 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s487d"] Feb 16 17:38:03 crc kubenswrapper[4870]: I0216 17:38:03.994480 4870 scope.go:117] "RemoveContainer" containerID="3e153e279f75e6afc18c29458a8afe3d86467394cfeaf46942bd75fd35608ad2" Feb 16 17:38:04 crc kubenswrapper[4870]: I0216 17:38:04.223347 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:38:04 crc kubenswrapper[4870]: E0216 17:38:04.224012 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:38:04 crc kubenswrapper[4870]: I0216 17:38:04.236056 4870 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="db6fcf97-0653-4411-b5ae-a3af8532801d" path="/var/lib/kubelet/pods/db6fcf97-0653-4411-b5ae-a3af8532801d/volumes" Feb 16 17:38:06 crc kubenswrapper[4870]: E0216 17:38:06.230631 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:38:19 crc kubenswrapper[4870]: I0216 17:38:19.223399 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:38:19 crc kubenswrapper[4870]: E0216 17:38:19.224246 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:38:19 crc kubenswrapper[4870]: E0216 17:38:19.225600 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:38:30 crc kubenswrapper[4870]: I0216 17:38:30.223430 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:38:30 crc kubenswrapper[4870]: E0216 17:38:30.224140 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:38:32 crc kubenswrapper[4870]: E0216 17:38:32.225532 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:38:45 crc kubenswrapper[4870]: I0216 17:38:45.223044 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:38:45 crc kubenswrapper[4870]: E0216 17:38:45.223737 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:38:46 crc kubenswrapper[4870]: E0216 17:38:46.235305 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:39:00 crc kubenswrapper[4870]: I0216 17:39:00.223776 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:39:00 crc 
kubenswrapper[4870]: E0216 17:39:00.224405 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:39:00 crc kubenswrapper[4870]: E0216 17:39:00.226123 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:39:12 crc kubenswrapper[4870]: I0216 17:39:12.223630 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:39:12 crc kubenswrapper[4870]: E0216 17:39:12.224724 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:39:13 crc kubenswrapper[4870]: E0216 17:39:13.226371 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:39:27 crc kubenswrapper[4870]: I0216 
17:39:27.224581 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:39:27 crc kubenswrapper[4870]: E0216 17:39:27.226112 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:39:28 crc kubenswrapper[4870]: E0216 17:39:28.226124 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:39:42 crc kubenswrapper[4870]: I0216 17:39:42.223457 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:39:42 crc kubenswrapper[4870]: E0216 17:39:42.224304 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:39:42 crc kubenswrapper[4870]: E0216 17:39:42.225170 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:39:54 crc kubenswrapper[4870]: E0216 17:39:54.225033 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:39:56 crc kubenswrapper[4870]: I0216 17:39:56.227977 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:39:56 crc kubenswrapper[4870]: E0216 17:39:56.228617 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:40:09 crc kubenswrapper[4870]: E0216 17:40:09.225838 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:40:11 crc kubenswrapper[4870]: I0216 17:40:11.222688 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:40:11 crc kubenswrapper[4870]: E0216 17:40:11.241165 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:40:23 crc kubenswrapper[4870]: E0216 17:40:23.224691 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:40:25 crc kubenswrapper[4870]: I0216 17:40:25.222845 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:40:25 crc kubenswrapper[4870]: E0216 17:40:25.223467 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:40:35 crc kubenswrapper[4870]: E0216 17:40:35.227996 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:40:37 crc kubenswrapper[4870]: I0216 17:40:37.223897 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 
16 17:40:37 crc kubenswrapper[4870]: E0216 17:40:37.224628 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:40:48 crc kubenswrapper[4870]: E0216 17:40:48.225479 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:40:52 crc kubenswrapper[4870]: I0216 17:40:52.223522 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:40:52 crc kubenswrapper[4870]: E0216 17:40:52.224327 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:40:58 crc kubenswrapper[4870]: I0216 17:40:58.558644 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5wbkp"] Feb 16 17:40:58 crc kubenswrapper[4870]: E0216 17:40:58.559536 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db6fcf97-0653-4411-b5ae-a3af8532801d" containerName="registry-server" Feb 16 17:40:58 crc kubenswrapper[4870]: I0216 
17:40:58.559548 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="db6fcf97-0653-4411-b5ae-a3af8532801d" containerName="registry-server" Feb 16 17:40:58 crc kubenswrapper[4870]: E0216 17:40:58.559561 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db6fcf97-0653-4411-b5ae-a3af8532801d" containerName="extract-content" Feb 16 17:40:58 crc kubenswrapper[4870]: I0216 17:40:58.559568 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="db6fcf97-0653-4411-b5ae-a3af8532801d" containerName="extract-content" Feb 16 17:40:58 crc kubenswrapper[4870]: E0216 17:40:58.559607 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db6fcf97-0653-4411-b5ae-a3af8532801d" containerName="extract-utilities" Feb 16 17:40:58 crc kubenswrapper[4870]: I0216 17:40:58.559613 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="db6fcf97-0653-4411-b5ae-a3af8532801d" containerName="extract-utilities" Feb 16 17:40:58 crc kubenswrapper[4870]: I0216 17:40:58.559807 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="db6fcf97-0653-4411-b5ae-a3af8532801d" containerName="registry-server" Feb 16 17:40:58 crc kubenswrapper[4870]: I0216 17:40:58.561507 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5wbkp" Feb 16 17:40:58 crc kubenswrapper[4870]: I0216 17:40:58.568326 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wbkp"] Feb 16 17:40:58 crc kubenswrapper[4870]: I0216 17:40:58.645601 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7133a5eb-0201-4055-b3f4-1e7aa71e9233-catalog-content\") pod \"community-operators-5wbkp\" (UID: \"7133a5eb-0201-4055-b3f4-1e7aa71e9233\") " pod="openshift-marketplace/community-operators-5wbkp" Feb 16 17:40:58 crc kubenswrapper[4870]: I0216 17:40:58.645678 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v44hc\" (UniqueName: \"kubernetes.io/projected/7133a5eb-0201-4055-b3f4-1e7aa71e9233-kube-api-access-v44hc\") pod \"community-operators-5wbkp\" (UID: \"7133a5eb-0201-4055-b3f4-1e7aa71e9233\") " pod="openshift-marketplace/community-operators-5wbkp" Feb 16 17:40:58 crc kubenswrapper[4870]: I0216 17:40:58.646030 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7133a5eb-0201-4055-b3f4-1e7aa71e9233-utilities\") pod \"community-operators-5wbkp\" (UID: \"7133a5eb-0201-4055-b3f4-1e7aa71e9233\") " pod="openshift-marketplace/community-operators-5wbkp" Feb 16 17:40:58 crc kubenswrapper[4870]: I0216 17:40:58.747741 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7133a5eb-0201-4055-b3f4-1e7aa71e9233-utilities\") pod \"community-operators-5wbkp\" (UID: \"7133a5eb-0201-4055-b3f4-1e7aa71e9233\") " pod="openshift-marketplace/community-operators-5wbkp" Feb 16 17:40:58 crc kubenswrapper[4870]: I0216 17:40:58.747863 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7133a5eb-0201-4055-b3f4-1e7aa71e9233-catalog-content\") pod \"community-operators-5wbkp\" (UID: \"7133a5eb-0201-4055-b3f4-1e7aa71e9233\") " pod="openshift-marketplace/community-operators-5wbkp" Feb 16 17:40:58 crc kubenswrapper[4870]: I0216 17:40:58.747939 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v44hc\" (UniqueName: \"kubernetes.io/projected/7133a5eb-0201-4055-b3f4-1e7aa71e9233-kube-api-access-v44hc\") pod \"community-operators-5wbkp\" (UID: \"7133a5eb-0201-4055-b3f4-1e7aa71e9233\") " pod="openshift-marketplace/community-operators-5wbkp" Feb 16 17:40:58 crc kubenswrapper[4870]: I0216 17:40:58.748325 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7133a5eb-0201-4055-b3f4-1e7aa71e9233-utilities\") pod \"community-operators-5wbkp\" (UID: \"7133a5eb-0201-4055-b3f4-1e7aa71e9233\") " pod="openshift-marketplace/community-operators-5wbkp" Feb 16 17:40:58 crc kubenswrapper[4870]: I0216 17:40:58.748399 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7133a5eb-0201-4055-b3f4-1e7aa71e9233-catalog-content\") pod \"community-operators-5wbkp\" (UID: \"7133a5eb-0201-4055-b3f4-1e7aa71e9233\") " pod="openshift-marketplace/community-operators-5wbkp" Feb 16 17:40:58 crc kubenswrapper[4870]: I0216 17:40:58.775869 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v44hc\" (UniqueName: \"kubernetes.io/projected/7133a5eb-0201-4055-b3f4-1e7aa71e9233-kube-api-access-v44hc\") pod \"community-operators-5wbkp\" (UID: \"7133a5eb-0201-4055-b3f4-1e7aa71e9233\") " pod="openshift-marketplace/community-operators-5wbkp" Feb 16 17:40:58 crc kubenswrapper[4870]: I0216 17:40:58.880678 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5wbkp" Feb 16 17:40:59 crc kubenswrapper[4870]: W0216 17:40:59.460005 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7133a5eb_0201_4055_b3f4_1e7aa71e9233.slice/crio-de73f342d9efe278656137305e3a107e81dea2fccd3ceea43950f89cc5b8fdea WatchSource:0}: Error finding container de73f342d9efe278656137305e3a107e81dea2fccd3ceea43950f89cc5b8fdea: Status 404 returned error can't find the container with id de73f342d9efe278656137305e3a107e81dea2fccd3ceea43950f89cc5b8fdea Feb 16 17:40:59 crc kubenswrapper[4870]: I0216 17:40:59.460451 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wbkp"] Feb 16 17:40:59 crc kubenswrapper[4870]: I0216 17:40:59.631881 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wbkp" event={"ID":"7133a5eb-0201-4055-b3f4-1e7aa71e9233","Type":"ContainerStarted","Data":"de73f342d9efe278656137305e3a107e81dea2fccd3ceea43950f89cc5b8fdea"} Feb 16 17:41:00 crc kubenswrapper[4870]: I0216 17:41:00.655122 4870 generic.go:334] "Generic (PLEG): container finished" podID="7133a5eb-0201-4055-b3f4-1e7aa71e9233" containerID="50c958539ddea9f86e0f5ac7408edc28db5ebd0b797c2428c87e1af665140679" exitCode=0 Feb 16 17:41:00 crc kubenswrapper[4870]: I0216 17:41:00.655461 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wbkp" event={"ID":"7133a5eb-0201-4055-b3f4-1e7aa71e9233","Type":"ContainerDied","Data":"50c958539ddea9f86e0f5ac7408edc28db5ebd0b797c2428c87e1af665140679"} Feb 16 17:41:01 crc kubenswrapper[4870]: E0216 17:41:01.224082 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:41:01 crc kubenswrapper[4870]: I0216 17:41:01.667677 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wbkp" event={"ID":"7133a5eb-0201-4055-b3f4-1e7aa71e9233","Type":"ContainerStarted","Data":"2795ab57e12cb09966b556c6ea2fe2b41317c78e7f8b217dd85af5c84ee465bb"} Feb 16 17:41:02 crc kubenswrapper[4870]: I0216 17:41:02.680114 4870 generic.go:334] "Generic (PLEG): container finished" podID="7133a5eb-0201-4055-b3f4-1e7aa71e9233" containerID="2795ab57e12cb09966b556c6ea2fe2b41317c78e7f8b217dd85af5c84ee465bb" exitCode=0 Feb 16 17:41:02 crc kubenswrapper[4870]: I0216 17:41:02.680220 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wbkp" event={"ID":"7133a5eb-0201-4055-b3f4-1e7aa71e9233","Type":"ContainerDied","Data":"2795ab57e12cb09966b556c6ea2fe2b41317c78e7f8b217dd85af5c84ee465bb"} Feb 16 17:41:03 crc kubenswrapper[4870]: I0216 17:41:03.694385 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wbkp" event={"ID":"7133a5eb-0201-4055-b3f4-1e7aa71e9233","Type":"ContainerStarted","Data":"871bfa051f3a383fd64f737e4122497bffd15ce9b4d470d38464a817135f662f"} Feb 16 17:41:03 crc kubenswrapper[4870]: I0216 17:41:03.732333 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5wbkp" podStartSLOduration=3.248176846 podStartE2EDuration="5.732310129s" podCreationTimestamp="2026-02-16 17:40:58 +0000 UTC" firstStartedPulling="2026-02-16 17:41:00.659543728 +0000 UTC m=+2465.143008132" lastFinishedPulling="2026-02-16 17:41:03.143677031 +0000 UTC m=+2467.627141415" observedRunningTime="2026-02-16 17:41:03.723120428 +0000 UTC m=+2468.206584852" watchObservedRunningTime="2026-02-16 
17:41:03.732310129 +0000 UTC m=+2468.215774513" Feb 16 17:41:07 crc kubenswrapper[4870]: I0216 17:41:07.223320 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:41:07 crc kubenswrapper[4870]: E0216 17:41:07.224050 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:41:08 crc kubenswrapper[4870]: I0216 17:41:08.881110 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5wbkp" Feb 16 17:41:08 crc kubenswrapper[4870]: I0216 17:41:08.881163 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5wbkp" Feb 16 17:41:08 crc kubenswrapper[4870]: I0216 17:41:08.929207 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5wbkp" Feb 16 17:41:09 crc kubenswrapper[4870]: I0216 17:41:09.811741 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5wbkp" Feb 16 17:41:09 crc kubenswrapper[4870]: I0216 17:41:09.870856 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wbkp"] Feb 16 17:41:11 crc kubenswrapper[4870]: I0216 17:41:11.770134 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5wbkp" podUID="7133a5eb-0201-4055-b3f4-1e7aa71e9233" containerName="registry-server" 
containerID="cri-o://871bfa051f3a383fd64f737e4122497bffd15ce9b4d470d38464a817135f662f" gracePeriod=2 Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.253153 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wbkp" Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.433927 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7133a5eb-0201-4055-b3f4-1e7aa71e9233-catalog-content\") pod \"7133a5eb-0201-4055-b3f4-1e7aa71e9233\" (UID: \"7133a5eb-0201-4055-b3f4-1e7aa71e9233\") " Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.434332 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7133a5eb-0201-4055-b3f4-1e7aa71e9233-utilities\") pod \"7133a5eb-0201-4055-b3f4-1e7aa71e9233\" (UID: \"7133a5eb-0201-4055-b3f4-1e7aa71e9233\") " Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.434427 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v44hc\" (UniqueName: \"kubernetes.io/projected/7133a5eb-0201-4055-b3f4-1e7aa71e9233-kube-api-access-v44hc\") pod \"7133a5eb-0201-4055-b3f4-1e7aa71e9233\" (UID: \"7133a5eb-0201-4055-b3f4-1e7aa71e9233\") " Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.436596 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7133a5eb-0201-4055-b3f4-1e7aa71e9233-utilities" (OuterVolumeSpecName: "utilities") pod "7133a5eb-0201-4055-b3f4-1e7aa71e9233" (UID: "7133a5eb-0201-4055-b3f4-1e7aa71e9233"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.441537 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7133a5eb-0201-4055-b3f4-1e7aa71e9233-kube-api-access-v44hc" (OuterVolumeSpecName: "kube-api-access-v44hc") pod "7133a5eb-0201-4055-b3f4-1e7aa71e9233" (UID: "7133a5eb-0201-4055-b3f4-1e7aa71e9233"). InnerVolumeSpecName "kube-api-access-v44hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.488131 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7133a5eb-0201-4055-b3f4-1e7aa71e9233-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7133a5eb-0201-4055-b3f4-1e7aa71e9233" (UID: "7133a5eb-0201-4055-b3f4-1e7aa71e9233"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.537110 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7133a5eb-0201-4055-b3f4-1e7aa71e9233-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.537154 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v44hc\" (UniqueName: \"kubernetes.io/projected/7133a5eb-0201-4055-b3f4-1e7aa71e9233-kube-api-access-v44hc\") on node \"crc\" DevicePath \"\"" Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.537168 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7133a5eb-0201-4055-b3f4-1e7aa71e9233-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.780776 4870 generic.go:334] "Generic (PLEG): container finished" podID="7133a5eb-0201-4055-b3f4-1e7aa71e9233" 
containerID="871bfa051f3a383fd64f737e4122497bffd15ce9b4d470d38464a817135f662f" exitCode=0 Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.780835 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wbkp" Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.780840 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wbkp" event={"ID":"7133a5eb-0201-4055-b3f4-1e7aa71e9233","Type":"ContainerDied","Data":"871bfa051f3a383fd64f737e4122497bffd15ce9b4d470d38464a817135f662f"} Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.780870 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wbkp" event={"ID":"7133a5eb-0201-4055-b3f4-1e7aa71e9233","Type":"ContainerDied","Data":"de73f342d9efe278656137305e3a107e81dea2fccd3ceea43950f89cc5b8fdea"} Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.780891 4870 scope.go:117] "RemoveContainer" containerID="871bfa051f3a383fd64f737e4122497bffd15ce9b4d470d38464a817135f662f" Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.801631 4870 scope.go:117] "RemoveContainer" containerID="2795ab57e12cb09966b556c6ea2fe2b41317c78e7f8b217dd85af5c84ee465bb" Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.828335 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wbkp"] Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.830027 4870 scope.go:117] "RemoveContainer" containerID="50c958539ddea9f86e0f5ac7408edc28db5ebd0b797c2428c87e1af665140679" Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.840786 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5wbkp"] Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.897180 4870 scope.go:117] "RemoveContainer" containerID="871bfa051f3a383fd64f737e4122497bffd15ce9b4d470d38464a817135f662f" Feb 16 
17:41:12 crc kubenswrapper[4870]: E0216 17:41:12.897706 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"871bfa051f3a383fd64f737e4122497bffd15ce9b4d470d38464a817135f662f\": container with ID starting with 871bfa051f3a383fd64f737e4122497bffd15ce9b4d470d38464a817135f662f not found: ID does not exist" containerID="871bfa051f3a383fd64f737e4122497bffd15ce9b4d470d38464a817135f662f" Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.897734 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"871bfa051f3a383fd64f737e4122497bffd15ce9b4d470d38464a817135f662f"} err="failed to get container status \"871bfa051f3a383fd64f737e4122497bffd15ce9b4d470d38464a817135f662f\": rpc error: code = NotFound desc = could not find container \"871bfa051f3a383fd64f737e4122497bffd15ce9b4d470d38464a817135f662f\": container with ID starting with 871bfa051f3a383fd64f737e4122497bffd15ce9b4d470d38464a817135f662f not found: ID does not exist" Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.897754 4870 scope.go:117] "RemoveContainer" containerID="2795ab57e12cb09966b556c6ea2fe2b41317c78e7f8b217dd85af5c84ee465bb" Feb 16 17:41:12 crc kubenswrapper[4870]: E0216 17:41:12.897982 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2795ab57e12cb09966b556c6ea2fe2b41317c78e7f8b217dd85af5c84ee465bb\": container with ID starting with 2795ab57e12cb09966b556c6ea2fe2b41317c78e7f8b217dd85af5c84ee465bb not found: ID does not exist" containerID="2795ab57e12cb09966b556c6ea2fe2b41317c78e7f8b217dd85af5c84ee465bb" Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.898007 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2795ab57e12cb09966b556c6ea2fe2b41317c78e7f8b217dd85af5c84ee465bb"} err="failed to get container status 
\"2795ab57e12cb09966b556c6ea2fe2b41317c78e7f8b217dd85af5c84ee465bb\": rpc error: code = NotFound desc = could not find container \"2795ab57e12cb09966b556c6ea2fe2b41317c78e7f8b217dd85af5c84ee465bb\": container with ID starting with 2795ab57e12cb09966b556c6ea2fe2b41317c78e7f8b217dd85af5c84ee465bb not found: ID does not exist" Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.898020 4870 scope.go:117] "RemoveContainer" containerID="50c958539ddea9f86e0f5ac7408edc28db5ebd0b797c2428c87e1af665140679" Feb 16 17:41:12 crc kubenswrapper[4870]: E0216 17:41:12.898213 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50c958539ddea9f86e0f5ac7408edc28db5ebd0b797c2428c87e1af665140679\": container with ID starting with 50c958539ddea9f86e0f5ac7408edc28db5ebd0b797c2428c87e1af665140679 not found: ID does not exist" containerID="50c958539ddea9f86e0f5ac7408edc28db5ebd0b797c2428c87e1af665140679" Feb 16 17:41:12 crc kubenswrapper[4870]: I0216 17:41:12.898233 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50c958539ddea9f86e0f5ac7408edc28db5ebd0b797c2428c87e1af665140679"} err="failed to get container status \"50c958539ddea9f86e0f5ac7408edc28db5ebd0b797c2428c87e1af665140679\": rpc error: code = NotFound desc = could not find container \"50c958539ddea9f86e0f5ac7408edc28db5ebd0b797c2428c87e1af665140679\": container with ID starting with 50c958539ddea9f86e0f5ac7408edc28db5ebd0b797c2428c87e1af665140679 not found: ID does not exist" Feb 16 17:41:13 crc kubenswrapper[4870]: E0216 17:41:13.224433 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:41:14 crc 
kubenswrapper[4870]: I0216 17:41:14.240146 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7133a5eb-0201-4055-b3f4-1e7aa71e9233" path="/var/lib/kubelet/pods/7133a5eb-0201-4055-b3f4-1e7aa71e9233/volumes" Feb 16 17:41:22 crc kubenswrapper[4870]: I0216 17:41:22.222774 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:41:22 crc kubenswrapper[4870]: E0216 17:41:22.223744 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:41:24 crc kubenswrapper[4870]: I0216 17:41:24.225000 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:41:24 crc kubenswrapper[4870]: E0216 17:41:24.343163 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:41:24 crc kubenswrapper[4870]: E0216 17:41:24.343247 4870 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:41:24 crc kubenswrapper[4870]: E0216 17:41:24.343433 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7b2p7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6hkdm_openstack(34a86750-1fff-4add-8462-7ab805ec7f89): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:41:24 crc kubenswrapper[4870]: E0216 17:41:24.344630 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:41:36 crc kubenswrapper[4870]: I0216 17:41:36.231342 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:41:37 crc kubenswrapper[4870]: I0216 17:41:37.089505 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerStarted","Data":"de06fc70ac67756f5abffd74866f8bf6e2870cab5fda4981e1540066f7cba926"} Feb 16 17:41:39 crc kubenswrapper[4870]: E0216 17:41:39.225477 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:41:54 crc kubenswrapper[4870]: E0216 17:41:54.225634 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:42:09 crc kubenswrapper[4870]: E0216 17:42:09.224583 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:42:24 crc kubenswrapper[4870]: E0216 17:42:24.225171 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:42:36 crc kubenswrapper[4870]: E0216 17:42:36.231711 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:42:49 crc kubenswrapper[4870]: E0216 17:42:49.226022 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:43:03 crc kubenswrapper[4870]: E0216 17:43:03.226076 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:43:15 crc kubenswrapper[4870]: E0216 17:43:15.226031 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:43:29 crc kubenswrapper[4870]: E0216 17:43:29.225643 4870 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:43:43 crc kubenswrapper[4870]: E0216 17:43:43.226156 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:43:45 crc kubenswrapper[4870]: I0216 17:43:45.842482 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vk6pz"] Feb 16 17:43:45 crc kubenswrapper[4870]: E0216 17:43:45.843807 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7133a5eb-0201-4055-b3f4-1e7aa71e9233" containerName="extract-content" Feb 16 17:43:45 crc kubenswrapper[4870]: I0216 17:43:45.843839 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="7133a5eb-0201-4055-b3f4-1e7aa71e9233" containerName="extract-content" Feb 16 17:43:45 crc kubenswrapper[4870]: E0216 17:43:45.843906 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7133a5eb-0201-4055-b3f4-1e7aa71e9233" containerName="extract-utilities" Feb 16 17:43:45 crc kubenswrapper[4870]: I0216 17:43:45.843930 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="7133a5eb-0201-4055-b3f4-1e7aa71e9233" containerName="extract-utilities" Feb 16 17:43:45 crc kubenswrapper[4870]: E0216 17:43:45.843991 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7133a5eb-0201-4055-b3f4-1e7aa71e9233" containerName="registry-server" Feb 16 17:43:45 crc kubenswrapper[4870]: I0216 17:43:45.844010 4870 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7133a5eb-0201-4055-b3f4-1e7aa71e9233" containerName="registry-server" Feb 16 17:43:45 crc kubenswrapper[4870]: I0216 17:43:45.844494 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="7133a5eb-0201-4055-b3f4-1e7aa71e9233" containerName="registry-server" Feb 16 17:43:45 crc kubenswrapper[4870]: I0216 17:43:45.848313 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vk6pz" Feb 16 17:43:45 crc kubenswrapper[4870]: I0216 17:43:45.867989 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vk6pz"] Feb 16 17:43:46 crc kubenswrapper[4870]: I0216 17:43:46.002132 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m64s4\" (UniqueName: \"kubernetes.io/projected/9cd6f754-c555-471e-926f-8f81525bfd2b-kube-api-access-m64s4\") pod \"redhat-operators-vk6pz\" (UID: \"9cd6f754-c555-471e-926f-8f81525bfd2b\") " pod="openshift-marketplace/redhat-operators-vk6pz" Feb 16 17:43:46 crc kubenswrapper[4870]: I0216 17:43:46.002264 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6f754-c555-471e-926f-8f81525bfd2b-utilities\") pod \"redhat-operators-vk6pz\" (UID: \"9cd6f754-c555-471e-926f-8f81525bfd2b\") " pod="openshift-marketplace/redhat-operators-vk6pz" Feb 16 17:43:46 crc kubenswrapper[4870]: I0216 17:43:46.002411 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6f754-c555-471e-926f-8f81525bfd2b-catalog-content\") pod \"redhat-operators-vk6pz\" (UID: \"9cd6f754-c555-471e-926f-8f81525bfd2b\") " pod="openshift-marketplace/redhat-operators-vk6pz" Feb 16 17:43:46 crc kubenswrapper[4870]: I0216 17:43:46.104126 4870 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-m64s4\" (UniqueName: \"kubernetes.io/projected/9cd6f754-c555-471e-926f-8f81525bfd2b-kube-api-access-m64s4\") pod \"redhat-operators-vk6pz\" (UID: \"9cd6f754-c555-471e-926f-8f81525bfd2b\") " pod="openshift-marketplace/redhat-operators-vk6pz" Feb 16 17:43:46 crc kubenswrapper[4870]: I0216 17:43:46.104239 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6f754-c555-471e-926f-8f81525bfd2b-utilities\") pod \"redhat-operators-vk6pz\" (UID: \"9cd6f754-c555-471e-926f-8f81525bfd2b\") " pod="openshift-marketplace/redhat-operators-vk6pz" Feb 16 17:43:46 crc kubenswrapper[4870]: I0216 17:43:46.104348 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6f754-c555-471e-926f-8f81525bfd2b-catalog-content\") pod \"redhat-operators-vk6pz\" (UID: \"9cd6f754-c555-471e-926f-8f81525bfd2b\") " pod="openshift-marketplace/redhat-operators-vk6pz" Feb 16 17:43:46 crc kubenswrapper[4870]: I0216 17:43:46.104831 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6f754-c555-471e-926f-8f81525bfd2b-catalog-content\") pod \"redhat-operators-vk6pz\" (UID: \"9cd6f754-c555-471e-926f-8f81525bfd2b\") " pod="openshift-marketplace/redhat-operators-vk6pz" Feb 16 17:43:46 crc kubenswrapper[4870]: I0216 17:43:46.104827 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6f754-c555-471e-926f-8f81525bfd2b-utilities\") pod \"redhat-operators-vk6pz\" (UID: \"9cd6f754-c555-471e-926f-8f81525bfd2b\") " pod="openshift-marketplace/redhat-operators-vk6pz" Feb 16 17:43:46 crc kubenswrapper[4870]: I0216 17:43:46.141026 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m64s4\" (UniqueName: 
\"kubernetes.io/projected/9cd6f754-c555-471e-926f-8f81525bfd2b-kube-api-access-m64s4\") pod \"redhat-operators-vk6pz\" (UID: \"9cd6f754-c555-471e-926f-8f81525bfd2b\") " pod="openshift-marketplace/redhat-operators-vk6pz" Feb 16 17:43:46 crc kubenswrapper[4870]: I0216 17:43:46.178070 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vk6pz" Feb 16 17:43:46 crc kubenswrapper[4870]: I0216 17:43:46.650982 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vk6pz"] Feb 16 17:43:47 crc kubenswrapper[4870]: I0216 17:43:47.416648 4870 generic.go:334] "Generic (PLEG): container finished" podID="9cd6f754-c555-471e-926f-8f81525bfd2b" containerID="6a8a70bc8d2614d1c14e7eda510f9de6801d7bf742db294f7b4536acc3be8a28" exitCode=0 Feb 16 17:43:47 crc kubenswrapper[4870]: I0216 17:43:47.416699 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk6pz" event={"ID":"9cd6f754-c555-471e-926f-8f81525bfd2b","Type":"ContainerDied","Data":"6a8a70bc8d2614d1c14e7eda510f9de6801d7bf742db294f7b4536acc3be8a28"} Feb 16 17:43:47 crc kubenswrapper[4870]: I0216 17:43:47.416926 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk6pz" event={"ID":"9cd6f754-c555-471e-926f-8f81525bfd2b","Type":"ContainerStarted","Data":"6a78c5d8bc35a805617313b552cf3943a49ee42d678941bfffe2350aef4ba930"} Feb 16 17:43:48 crc kubenswrapper[4870]: I0216 17:43:48.429763 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk6pz" event={"ID":"9cd6f754-c555-471e-926f-8f81525bfd2b","Type":"ContainerStarted","Data":"5184baed9a0a0caa8000d00b25c539b22b92723f3ff498684c4217f5fa81997e"} Feb 16 17:43:51 crc kubenswrapper[4870]: I0216 17:43:51.461115 4870 generic.go:334] "Generic (PLEG): container finished" podID="9cd6f754-c555-471e-926f-8f81525bfd2b" 
containerID="5184baed9a0a0caa8000d00b25c539b22b92723f3ff498684c4217f5fa81997e" exitCode=0 Feb 16 17:43:51 crc kubenswrapper[4870]: I0216 17:43:51.461230 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk6pz" event={"ID":"9cd6f754-c555-471e-926f-8f81525bfd2b","Type":"ContainerDied","Data":"5184baed9a0a0caa8000d00b25c539b22b92723f3ff498684c4217f5fa81997e"} Feb 16 17:43:52 crc kubenswrapper[4870]: I0216 17:43:52.472933 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk6pz" event={"ID":"9cd6f754-c555-471e-926f-8f81525bfd2b","Type":"ContainerStarted","Data":"f5d66fa1b2c3d929ca039ea2b6caacada388a722dbae7c261642d5e4c112cb47"} Feb 16 17:43:52 crc kubenswrapper[4870]: I0216 17:43:52.504789 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vk6pz" podStartSLOduration=2.856316924 podStartE2EDuration="7.504766583s" podCreationTimestamp="2026-02-16 17:43:45 +0000 UTC" firstStartedPulling="2026-02-16 17:43:47.41861114 +0000 UTC m=+2631.902075524" lastFinishedPulling="2026-02-16 17:43:52.067060789 +0000 UTC m=+2636.550525183" observedRunningTime="2026-02-16 17:43:52.496713344 +0000 UTC m=+2636.980177728" watchObservedRunningTime="2026-02-16 17:43:52.504766583 +0000 UTC m=+2636.988230977" Feb 16 17:43:54 crc kubenswrapper[4870]: E0216 17:43:54.226306 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:43:56 crc kubenswrapper[4870]: I0216 17:43:56.179212 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vk6pz" Feb 16 17:43:56 crc kubenswrapper[4870]: I0216 
17:43:56.180522 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vk6pz" Feb 16 17:43:57 crc kubenswrapper[4870]: I0216 17:43:57.268915 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vk6pz" podUID="9cd6f754-c555-471e-926f-8f81525bfd2b" containerName="registry-server" probeResult="failure" output=< Feb 16 17:43:57 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s Feb 16 17:43:57 crc kubenswrapper[4870]: > Feb 16 17:44:05 crc kubenswrapper[4870]: I0216 17:44:05.367049 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:44:05 crc kubenswrapper[4870]: I0216 17:44:05.367611 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:44:06 crc kubenswrapper[4870]: I0216 17:44:06.239098 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vk6pz" Feb 16 17:44:06 crc kubenswrapper[4870]: I0216 17:44:06.298229 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vk6pz" Feb 16 17:44:06 crc kubenswrapper[4870]: I0216 17:44:06.477350 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vk6pz"] Feb 16 17:44:07 crc kubenswrapper[4870]: I0216 17:44:07.644846 4870 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-vk6pz" podUID="9cd6f754-c555-471e-926f-8f81525bfd2b" containerName="registry-server" containerID="cri-o://f5d66fa1b2c3d929ca039ea2b6caacada388a722dbae7c261642d5e4c112cb47" gracePeriod=2 Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.161972 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vk6pz" Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.284911 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m64s4\" (UniqueName: \"kubernetes.io/projected/9cd6f754-c555-471e-926f-8f81525bfd2b-kube-api-access-m64s4\") pod \"9cd6f754-c555-471e-926f-8f81525bfd2b\" (UID: \"9cd6f754-c555-471e-926f-8f81525bfd2b\") " Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.285152 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6f754-c555-471e-926f-8f81525bfd2b-catalog-content\") pod \"9cd6f754-c555-471e-926f-8f81525bfd2b\" (UID: \"9cd6f754-c555-471e-926f-8f81525bfd2b\") " Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.285330 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6f754-c555-471e-926f-8f81525bfd2b-utilities\") pod \"9cd6f754-c555-471e-926f-8f81525bfd2b\" (UID: \"9cd6f754-c555-471e-926f-8f81525bfd2b\") " Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.286295 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cd6f754-c555-471e-926f-8f81525bfd2b-utilities" (OuterVolumeSpecName: "utilities") pod "9cd6f754-c555-471e-926f-8f81525bfd2b" (UID: "9cd6f754-c555-471e-926f-8f81525bfd2b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.286815 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6f754-c555-471e-926f-8f81525bfd2b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.292681 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd6f754-c555-471e-926f-8f81525bfd2b-kube-api-access-m64s4" (OuterVolumeSpecName: "kube-api-access-m64s4") pod "9cd6f754-c555-471e-926f-8f81525bfd2b" (UID: "9cd6f754-c555-471e-926f-8f81525bfd2b"). InnerVolumeSpecName "kube-api-access-m64s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.388932 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m64s4\" (UniqueName: \"kubernetes.io/projected/9cd6f754-c555-471e-926f-8f81525bfd2b-kube-api-access-m64s4\") on node \"crc\" DevicePath \"\"" Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.427598 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cd6f754-c555-471e-926f-8f81525bfd2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cd6f754-c555-471e-926f-8f81525bfd2b" (UID: "9cd6f754-c555-471e-926f-8f81525bfd2b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.490768 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6f754-c555-471e-926f-8f81525bfd2b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.654276 4870 generic.go:334] "Generic (PLEG): container finished" podID="9cd6f754-c555-471e-926f-8f81525bfd2b" containerID="f5d66fa1b2c3d929ca039ea2b6caacada388a722dbae7c261642d5e4c112cb47" exitCode=0 Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.654327 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk6pz" event={"ID":"9cd6f754-c555-471e-926f-8f81525bfd2b","Type":"ContainerDied","Data":"f5d66fa1b2c3d929ca039ea2b6caacada388a722dbae7c261642d5e4c112cb47"} Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.654357 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk6pz" event={"ID":"9cd6f754-c555-471e-926f-8f81525bfd2b","Type":"ContainerDied","Data":"6a78c5d8bc35a805617313b552cf3943a49ee42d678941bfffe2350aef4ba930"} Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.654369 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vk6pz" Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.654377 4870 scope.go:117] "RemoveContainer" containerID="f5d66fa1b2c3d929ca039ea2b6caacada388a722dbae7c261642d5e4c112cb47" Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.705881 4870 scope.go:117] "RemoveContainer" containerID="5184baed9a0a0caa8000d00b25c539b22b92723f3ff498684c4217f5fa81997e" Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.718390 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vk6pz"] Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.726132 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vk6pz"] Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.735199 4870 scope.go:117] "RemoveContainer" containerID="6a8a70bc8d2614d1c14e7eda510f9de6801d7bf742db294f7b4536acc3be8a28" Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.793546 4870 scope.go:117] "RemoveContainer" containerID="f5d66fa1b2c3d929ca039ea2b6caacada388a722dbae7c261642d5e4c112cb47" Feb 16 17:44:08 crc kubenswrapper[4870]: E0216 17:44:08.793962 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5d66fa1b2c3d929ca039ea2b6caacada388a722dbae7c261642d5e4c112cb47\": container with ID starting with f5d66fa1b2c3d929ca039ea2b6caacada388a722dbae7c261642d5e4c112cb47 not found: ID does not exist" containerID="f5d66fa1b2c3d929ca039ea2b6caacada388a722dbae7c261642d5e4c112cb47" Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.794003 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5d66fa1b2c3d929ca039ea2b6caacada388a722dbae7c261642d5e4c112cb47"} err="failed to get container status \"f5d66fa1b2c3d929ca039ea2b6caacada388a722dbae7c261642d5e4c112cb47\": rpc error: code = NotFound desc = could not find container 
\"f5d66fa1b2c3d929ca039ea2b6caacada388a722dbae7c261642d5e4c112cb47\": container with ID starting with f5d66fa1b2c3d929ca039ea2b6caacada388a722dbae7c261642d5e4c112cb47 not found: ID does not exist" Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.794030 4870 scope.go:117] "RemoveContainer" containerID="5184baed9a0a0caa8000d00b25c539b22b92723f3ff498684c4217f5fa81997e" Feb 16 17:44:08 crc kubenswrapper[4870]: E0216 17:44:08.794313 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5184baed9a0a0caa8000d00b25c539b22b92723f3ff498684c4217f5fa81997e\": container with ID starting with 5184baed9a0a0caa8000d00b25c539b22b92723f3ff498684c4217f5fa81997e not found: ID does not exist" containerID="5184baed9a0a0caa8000d00b25c539b22b92723f3ff498684c4217f5fa81997e" Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.794355 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5184baed9a0a0caa8000d00b25c539b22b92723f3ff498684c4217f5fa81997e"} err="failed to get container status \"5184baed9a0a0caa8000d00b25c539b22b92723f3ff498684c4217f5fa81997e\": rpc error: code = NotFound desc = could not find container \"5184baed9a0a0caa8000d00b25c539b22b92723f3ff498684c4217f5fa81997e\": container with ID starting with 5184baed9a0a0caa8000d00b25c539b22b92723f3ff498684c4217f5fa81997e not found: ID does not exist" Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.794389 4870 scope.go:117] "RemoveContainer" containerID="6a8a70bc8d2614d1c14e7eda510f9de6801d7bf742db294f7b4536acc3be8a28" Feb 16 17:44:08 crc kubenswrapper[4870]: E0216 17:44:08.794721 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a8a70bc8d2614d1c14e7eda510f9de6801d7bf742db294f7b4536acc3be8a28\": container with ID starting with 6a8a70bc8d2614d1c14e7eda510f9de6801d7bf742db294f7b4536acc3be8a28 not found: ID does not exist" 
containerID="6a8a70bc8d2614d1c14e7eda510f9de6801d7bf742db294f7b4536acc3be8a28" Feb 16 17:44:08 crc kubenswrapper[4870]: I0216 17:44:08.794745 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a8a70bc8d2614d1c14e7eda510f9de6801d7bf742db294f7b4536acc3be8a28"} err="failed to get container status \"6a8a70bc8d2614d1c14e7eda510f9de6801d7bf742db294f7b4536acc3be8a28\": rpc error: code = NotFound desc = could not find container \"6a8a70bc8d2614d1c14e7eda510f9de6801d7bf742db294f7b4536acc3be8a28\": container with ID starting with 6a8a70bc8d2614d1c14e7eda510f9de6801d7bf742db294f7b4536acc3be8a28 not found: ID does not exist" Feb 16 17:44:09 crc kubenswrapper[4870]: E0216 17:44:09.224861 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:44:10 crc kubenswrapper[4870]: I0216 17:44:10.234080 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cd6f754-c555-471e-926f-8f81525bfd2b" path="/var/lib/kubelet/pods/9cd6f754-c555-471e-926f-8f81525bfd2b/volumes" Feb 16 17:44:23 crc kubenswrapper[4870]: E0216 17:44:23.226328 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:44:35 crc kubenswrapper[4870]: I0216 17:44:35.367398 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:44:35 crc kubenswrapper[4870]: I0216 17:44:35.367903 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:44:36 crc kubenswrapper[4870]: E0216 17:44:36.231113 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:44:48 crc kubenswrapper[4870]: E0216 17:44:48.225132 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.154148 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r"] Feb 16 17:45:00 crc kubenswrapper[4870]: E0216 17:45:00.155156 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd6f754-c555-471e-926f-8f81525bfd2b" containerName="registry-server" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.155174 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd6f754-c555-471e-926f-8f81525bfd2b" containerName="registry-server" Feb 16 17:45:00 crc kubenswrapper[4870]: E0216 17:45:00.155191 4870 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="9cd6f754-c555-471e-926f-8f81525bfd2b" containerName="extract-utilities" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.155199 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd6f754-c555-471e-926f-8f81525bfd2b" containerName="extract-utilities" Feb 16 17:45:00 crc kubenswrapper[4870]: E0216 17:45:00.155239 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd6f754-c555-471e-926f-8f81525bfd2b" containerName="extract-content" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.155247 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd6f754-c555-471e-926f-8f81525bfd2b" containerName="extract-content" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.155482 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd6f754-c555-471e-926f-8f81525bfd2b" containerName="registry-server" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.156552 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.160070 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.161114 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.170878 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r"] Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.270691 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2def1cc-19c7-4879-91af-e70795f1942e-config-volume\") pod \"collect-profiles-29521065-46m6r\" (UID: \"e2def1cc-19c7-4879-91af-e70795f1942e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.270840 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j2dg\" (UniqueName: \"kubernetes.io/projected/e2def1cc-19c7-4879-91af-e70795f1942e-kube-api-access-2j2dg\") pod \"collect-profiles-29521065-46m6r\" (UID: \"e2def1cc-19c7-4879-91af-e70795f1942e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.270904 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2def1cc-19c7-4879-91af-e70795f1942e-secret-volume\") pod \"collect-profiles-29521065-46m6r\" (UID: \"e2def1cc-19c7-4879-91af-e70795f1942e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.372929 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j2dg\" (UniqueName: \"kubernetes.io/projected/e2def1cc-19c7-4879-91af-e70795f1942e-kube-api-access-2j2dg\") pod \"collect-profiles-29521065-46m6r\" (UID: \"e2def1cc-19c7-4879-91af-e70795f1942e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.373034 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2def1cc-19c7-4879-91af-e70795f1942e-secret-volume\") pod \"collect-profiles-29521065-46m6r\" (UID: \"e2def1cc-19c7-4879-91af-e70795f1942e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.373140 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2def1cc-19c7-4879-91af-e70795f1942e-config-volume\") pod \"collect-profiles-29521065-46m6r\" (UID: \"e2def1cc-19c7-4879-91af-e70795f1942e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.374186 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2def1cc-19c7-4879-91af-e70795f1942e-config-volume\") pod \"collect-profiles-29521065-46m6r\" (UID: \"e2def1cc-19c7-4879-91af-e70795f1942e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.389766 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e2def1cc-19c7-4879-91af-e70795f1942e-secret-volume\") pod \"collect-profiles-29521065-46m6r\" (UID: \"e2def1cc-19c7-4879-91af-e70795f1942e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.393609 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j2dg\" (UniqueName: \"kubernetes.io/projected/e2def1cc-19c7-4879-91af-e70795f1942e-kube-api-access-2j2dg\") pod \"collect-profiles-29521065-46m6r\" (UID: \"e2def1cc-19c7-4879-91af-e70795f1942e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.488819 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" Feb 16 17:45:00 crc kubenswrapper[4870]: I0216 17:45:00.953413 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r"] Feb 16 17:45:01 crc kubenswrapper[4870]: I0216 17:45:01.189696 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" event={"ID":"e2def1cc-19c7-4879-91af-e70795f1942e","Type":"ContainerStarted","Data":"bb554f189a7cdc6ed59be8f4ac17785ae16ffcf25c5c82c5a8e6a480092693e3"} Feb 16 17:45:01 crc kubenswrapper[4870]: I0216 17:45:01.189755 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" event={"ID":"e2def1cc-19c7-4879-91af-e70795f1942e","Type":"ContainerStarted","Data":"50d29aef9ad18c58b611e375c01854f741727e43389b6203bc872e0d84970615"} Feb 16 17:45:01 crc kubenswrapper[4870]: I0216 17:45:01.205449 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" 
podStartSLOduration=1.205432412 podStartE2EDuration="1.205432412s" podCreationTimestamp="2026-02-16 17:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:45:01.205215926 +0000 UTC m=+2705.688680310" watchObservedRunningTime="2026-02-16 17:45:01.205432412 +0000 UTC m=+2705.688896796" Feb 16 17:45:01 crc kubenswrapper[4870]: E0216 17:45:01.224807 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:45:02 crc kubenswrapper[4870]: I0216 17:45:02.199997 4870 generic.go:334] "Generic (PLEG): container finished" podID="e2def1cc-19c7-4879-91af-e70795f1942e" containerID="bb554f189a7cdc6ed59be8f4ac17785ae16ffcf25c5c82c5a8e6a480092693e3" exitCode=0 Feb 16 17:45:02 crc kubenswrapper[4870]: I0216 17:45:02.200107 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" event={"ID":"e2def1cc-19c7-4879-91af-e70795f1942e","Type":"ContainerDied","Data":"bb554f189a7cdc6ed59be8f4ac17785ae16ffcf25c5c82c5a8e6a480092693e3"} Feb 16 17:45:03 crc kubenswrapper[4870]: I0216 17:45:03.597048 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" Feb 16 17:45:03 crc kubenswrapper[4870]: I0216 17:45:03.745090 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2def1cc-19c7-4879-91af-e70795f1942e-secret-volume\") pod \"e2def1cc-19c7-4879-91af-e70795f1942e\" (UID: \"e2def1cc-19c7-4879-91af-e70795f1942e\") " Feb 16 17:45:03 crc kubenswrapper[4870]: I0216 17:45:03.745214 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2def1cc-19c7-4879-91af-e70795f1942e-config-volume\") pod \"e2def1cc-19c7-4879-91af-e70795f1942e\" (UID: \"e2def1cc-19c7-4879-91af-e70795f1942e\") " Feb 16 17:45:03 crc kubenswrapper[4870]: I0216 17:45:03.745334 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j2dg\" (UniqueName: \"kubernetes.io/projected/e2def1cc-19c7-4879-91af-e70795f1942e-kube-api-access-2j2dg\") pod \"e2def1cc-19c7-4879-91af-e70795f1942e\" (UID: \"e2def1cc-19c7-4879-91af-e70795f1942e\") " Feb 16 17:45:03 crc kubenswrapper[4870]: I0216 17:45:03.745894 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2def1cc-19c7-4879-91af-e70795f1942e-config-volume" (OuterVolumeSpecName: "config-volume") pod "e2def1cc-19c7-4879-91af-e70795f1942e" (UID: "e2def1cc-19c7-4879-91af-e70795f1942e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:45:03 crc kubenswrapper[4870]: I0216 17:45:03.746052 4870 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2def1cc-19c7-4879-91af-e70795f1942e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 17:45:03 crc kubenswrapper[4870]: I0216 17:45:03.751500 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2def1cc-19c7-4879-91af-e70795f1942e-kube-api-access-2j2dg" (OuterVolumeSpecName: "kube-api-access-2j2dg") pod "e2def1cc-19c7-4879-91af-e70795f1942e" (UID: "e2def1cc-19c7-4879-91af-e70795f1942e"). InnerVolumeSpecName "kube-api-access-2j2dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:45:03 crc kubenswrapper[4870]: I0216 17:45:03.753100 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2def1cc-19c7-4879-91af-e70795f1942e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e2def1cc-19c7-4879-91af-e70795f1942e" (UID: "e2def1cc-19c7-4879-91af-e70795f1942e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:45:03 crc kubenswrapper[4870]: I0216 17:45:03.848474 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j2dg\" (UniqueName: \"kubernetes.io/projected/e2def1cc-19c7-4879-91af-e70795f1942e-kube-api-access-2j2dg\") on node \"crc\" DevicePath \"\"" Feb 16 17:45:03 crc kubenswrapper[4870]: I0216 17:45:03.848540 4870 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2def1cc-19c7-4879-91af-e70795f1942e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 17:45:04 crc kubenswrapper[4870]: I0216 17:45:04.218272 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" event={"ID":"e2def1cc-19c7-4879-91af-e70795f1942e","Type":"ContainerDied","Data":"50d29aef9ad18c58b611e375c01854f741727e43389b6203bc872e0d84970615"} Feb 16 17:45:04 crc kubenswrapper[4870]: I0216 17:45:04.218311 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50d29aef9ad18c58b611e375c01854f741727e43389b6203bc872e0d84970615" Feb 16 17:45:04 crc kubenswrapper[4870]: I0216 17:45:04.218357 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-46m6r" Feb 16 17:45:04 crc kubenswrapper[4870]: I0216 17:45:04.289506 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf"] Feb 16 17:45:04 crc kubenswrapper[4870]: I0216 17:45:04.296900 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521020-9sztf"] Feb 16 17:45:05 crc kubenswrapper[4870]: I0216 17:45:05.366694 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:45:05 crc kubenswrapper[4870]: I0216 17:45:05.367050 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:45:05 crc kubenswrapper[4870]: I0216 17:45:05.367100 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:45:05 crc kubenswrapper[4870]: I0216 17:45:05.368076 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de06fc70ac67756f5abffd74866f8bf6e2870cab5fda4981e1540066f7cba926"} pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:45:05 crc kubenswrapper[4870]: I0216 17:45:05.368163 4870 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" containerID="cri-o://de06fc70ac67756f5abffd74866f8bf6e2870cab5fda4981e1540066f7cba926" gracePeriod=600 Feb 16 17:45:06 crc kubenswrapper[4870]: I0216 17:45:06.279026 4870 generic.go:334] "Generic (PLEG): container finished" podID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerID="de06fc70ac67756f5abffd74866f8bf6e2870cab5fda4981e1540066f7cba926" exitCode=0 Feb 16 17:45:06 crc kubenswrapper[4870]: I0216 17:45:06.285077 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad671622-917b-4e62-a887-d2d6e0935f2e" path="/var/lib/kubelet/pods/ad671622-917b-4e62-a887-d2d6e0935f2e/volumes" Feb 16 17:45:06 crc kubenswrapper[4870]: I0216 17:45:06.292481 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerDied","Data":"de06fc70ac67756f5abffd74866f8bf6e2870cab5fda4981e1540066f7cba926"} Feb 16 17:45:06 crc kubenswrapper[4870]: I0216 17:45:06.292534 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerStarted","Data":"7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053"} Feb 16 17:45:06 crc kubenswrapper[4870]: I0216 17:45:06.292556 4870 scope.go:117] "RemoveContainer" containerID="dea05be3d29b3ad692ada1b36f7854a1d39e5e0b8b93db3a792681f49b06636c" Feb 16 17:45:15 crc kubenswrapper[4870]: E0216 17:45:15.225552 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" 
podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:45:28 crc kubenswrapper[4870]: E0216 17:45:28.225327 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:45:40 crc kubenswrapper[4870]: E0216 17:45:40.225494 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:45:51 crc kubenswrapper[4870]: E0216 17:45:51.225554 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:46:02 crc kubenswrapper[4870]: E0216 17:46:02.224689 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:46:04 crc kubenswrapper[4870]: I0216 17:46:04.714512 4870 scope.go:117] "RemoveContainer" containerID="bd8356847a79aea985e806460936709f6752079e00ec36e216e381c5f0178d6a" Feb 16 17:46:13 crc kubenswrapper[4870]: E0216 17:46:13.226721 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:46:27 crc kubenswrapper[4870]: I0216 17:46:27.225173 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:46:27 crc kubenswrapper[4870]: E0216 17:46:27.353549 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:46:27 crc kubenswrapper[4870]: E0216 17:46:27.353596 4870 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:46:27 crc kubenswrapper[4870]: E0216 17:46:27.353702 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7b2p7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6hkdm_openstack(34a86750-1fff-4add-8462-7ab805ec7f89): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:46:27 crc kubenswrapper[4870]: E0216 17:46:27.354868 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:46:38 crc kubenswrapper[4870]: E0216 17:46:38.225695 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:46:53 crc kubenswrapper[4870]: E0216 17:46:53.225845 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:47:05 crc kubenswrapper[4870]: I0216 17:47:05.366689 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:47:05 crc kubenswrapper[4870]: I0216 17:47:05.367396 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:47:07 crc kubenswrapper[4870]: E0216 17:47:07.224921 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:47:18 crc kubenswrapper[4870]: E0216 17:47:18.229166 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:47:31 crc kubenswrapper[4870]: E0216 17:47:31.224861 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:47:35 crc kubenswrapper[4870]: I0216 17:47:35.367057 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:47:35 crc kubenswrapper[4870]: I0216 17:47:35.367755 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:47:46 crc kubenswrapper[4870]: E0216 17:47:46.230802 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" 
podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:48:01 crc kubenswrapper[4870]: E0216 17:48:01.225744 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:48:01 crc kubenswrapper[4870]: I0216 17:48:01.304555 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tx6h7"] Feb 16 17:48:01 crc kubenswrapper[4870]: E0216 17:48:01.305120 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2def1cc-19c7-4879-91af-e70795f1942e" containerName="collect-profiles" Feb 16 17:48:01 crc kubenswrapper[4870]: I0216 17:48:01.305140 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2def1cc-19c7-4879-91af-e70795f1942e" containerName="collect-profiles" Feb 16 17:48:01 crc kubenswrapper[4870]: I0216 17:48:01.305462 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2def1cc-19c7-4879-91af-e70795f1942e" containerName="collect-profiles" Feb 16 17:48:01 crc kubenswrapper[4870]: I0216 17:48:01.307436 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tx6h7" Feb 16 17:48:01 crc kubenswrapper[4870]: I0216 17:48:01.316643 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tx6h7"] Feb 16 17:48:01 crc kubenswrapper[4870]: I0216 17:48:01.436420 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744b3a3c-3637-49a1-8138-f35288c979b4-catalog-content\") pod \"certified-operators-tx6h7\" (UID: \"744b3a3c-3637-49a1-8138-f35288c979b4\") " pod="openshift-marketplace/certified-operators-tx6h7" Feb 16 17:48:01 crc kubenswrapper[4870]: I0216 17:48:01.436506 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744b3a3c-3637-49a1-8138-f35288c979b4-utilities\") pod \"certified-operators-tx6h7\" (UID: \"744b3a3c-3637-49a1-8138-f35288c979b4\") " pod="openshift-marketplace/certified-operators-tx6h7" Feb 16 17:48:01 crc kubenswrapper[4870]: I0216 17:48:01.436584 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx84j\" (UniqueName: \"kubernetes.io/projected/744b3a3c-3637-49a1-8138-f35288c979b4-kube-api-access-bx84j\") pod \"certified-operators-tx6h7\" (UID: \"744b3a3c-3637-49a1-8138-f35288c979b4\") " pod="openshift-marketplace/certified-operators-tx6h7" Feb 16 17:48:01 crc kubenswrapper[4870]: I0216 17:48:01.537967 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744b3a3c-3637-49a1-8138-f35288c979b4-utilities\") pod \"certified-operators-tx6h7\" (UID: \"744b3a3c-3637-49a1-8138-f35288c979b4\") " pod="openshift-marketplace/certified-operators-tx6h7" Feb 16 17:48:01 crc kubenswrapper[4870]: I0216 17:48:01.538306 4870 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bx84j\" (UniqueName: \"kubernetes.io/projected/744b3a3c-3637-49a1-8138-f35288c979b4-kube-api-access-bx84j\") pod \"certified-operators-tx6h7\" (UID: \"744b3a3c-3637-49a1-8138-f35288c979b4\") " pod="openshift-marketplace/certified-operators-tx6h7" Feb 16 17:48:01 crc kubenswrapper[4870]: I0216 17:48:01.538429 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744b3a3c-3637-49a1-8138-f35288c979b4-catalog-content\") pod \"certified-operators-tx6h7\" (UID: \"744b3a3c-3637-49a1-8138-f35288c979b4\") " pod="openshift-marketplace/certified-operators-tx6h7" Feb 16 17:48:01 crc kubenswrapper[4870]: I0216 17:48:01.538898 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744b3a3c-3637-49a1-8138-f35288c979b4-catalog-content\") pod \"certified-operators-tx6h7\" (UID: \"744b3a3c-3637-49a1-8138-f35288c979b4\") " pod="openshift-marketplace/certified-operators-tx6h7" Feb 16 17:48:01 crc kubenswrapper[4870]: I0216 17:48:01.539195 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744b3a3c-3637-49a1-8138-f35288c979b4-utilities\") pod \"certified-operators-tx6h7\" (UID: \"744b3a3c-3637-49a1-8138-f35288c979b4\") " pod="openshift-marketplace/certified-operators-tx6h7" Feb 16 17:48:01 crc kubenswrapper[4870]: I0216 17:48:01.561372 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx84j\" (UniqueName: \"kubernetes.io/projected/744b3a3c-3637-49a1-8138-f35288c979b4-kube-api-access-bx84j\") pod \"certified-operators-tx6h7\" (UID: \"744b3a3c-3637-49a1-8138-f35288c979b4\") " pod="openshift-marketplace/certified-operators-tx6h7" Feb 16 17:48:01 crc kubenswrapper[4870]: I0216 17:48:01.645721 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tx6h7" Feb 16 17:48:02 crc kubenswrapper[4870]: W0216 17:48:02.223645 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod744b3a3c_3637_49a1_8138_f35288c979b4.slice/crio-9f603854b535b3fba95e60639658b9c383e95300e775cb7f45d1fe7712c6d762 WatchSource:0}: Error finding container 9f603854b535b3fba95e60639658b9c383e95300e775cb7f45d1fe7712c6d762: Status 404 returned error can't find the container with id 9f603854b535b3fba95e60639658b9c383e95300e775cb7f45d1fe7712c6d762 Feb 16 17:48:02 crc kubenswrapper[4870]: I0216 17:48:02.247659 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tx6h7"] Feb 16 17:48:02 crc kubenswrapper[4870]: I0216 17:48:02.960096 4870 generic.go:334] "Generic (PLEG): container finished" podID="744b3a3c-3637-49a1-8138-f35288c979b4" containerID="3c24157f669c85d06f1835096544080ca6be234dcb0aae148c5d26e28ee980ad" exitCode=0 Feb 16 17:48:02 crc kubenswrapper[4870]: I0216 17:48:02.960443 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx6h7" event={"ID":"744b3a3c-3637-49a1-8138-f35288c979b4","Type":"ContainerDied","Data":"3c24157f669c85d06f1835096544080ca6be234dcb0aae148c5d26e28ee980ad"} Feb 16 17:48:02 crc kubenswrapper[4870]: I0216 17:48:02.960476 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx6h7" event={"ID":"744b3a3c-3637-49a1-8138-f35288c979b4","Type":"ContainerStarted","Data":"9f603854b535b3fba95e60639658b9c383e95300e775cb7f45d1fe7712c6d762"} Feb 16 17:48:03 crc kubenswrapper[4870]: I0216 17:48:03.969642 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx6h7" 
event={"ID":"744b3a3c-3637-49a1-8138-f35288c979b4","Type":"ContainerStarted","Data":"3c189ebca163b45615278738ed28e4be6e15b4f14dc387652268e8d22eace4d1"} Feb 16 17:48:04 crc kubenswrapper[4870]: I0216 17:48:04.982162 4870 generic.go:334] "Generic (PLEG): container finished" podID="744b3a3c-3637-49a1-8138-f35288c979b4" containerID="3c189ebca163b45615278738ed28e4be6e15b4f14dc387652268e8d22eace4d1" exitCode=0 Feb 16 17:48:04 crc kubenswrapper[4870]: I0216 17:48:04.982219 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx6h7" event={"ID":"744b3a3c-3637-49a1-8138-f35288c979b4","Type":"ContainerDied","Data":"3c189ebca163b45615278738ed28e4be6e15b4f14dc387652268e8d22eace4d1"} Feb 16 17:48:05 crc kubenswrapper[4870]: I0216 17:48:05.366783 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:48:05 crc kubenswrapper[4870]: I0216 17:48:05.366873 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:48:05 crc kubenswrapper[4870]: I0216 17:48:05.366937 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:48:05 crc kubenswrapper[4870]: I0216 17:48:05.368092 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053"} 
pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:48:05 crc kubenswrapper[4870]: I0216 17:48:05.368198 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" containerID="cri-o://7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" gracePeriod=600 Feb 16 17:48:05 crc kubenswrapper[4870]: E0216 17:48:05.496303 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:48:05 crc kubenswrapper[4870]: I0216 17:48:05.997572 4870 generic.go:334] "Generic (PLEG): container finished" podID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" exitCode=0 Feb 16 17:48:05 crc kubenswrapper[4870]: I0216 17:48:05.997691 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerDied","Data":"7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053"} Feb 16 17:48:05 crc kubenswrapper[4870]: I0216 17:48:05.998024 4870 scope.go:117] "RemoveContainer" containerID="de06fc70ac67756f5abffd74866f8bf6e2870cab5fda4981e1540066f7cba926" Feb 16 17:48:05 crc kubenswrapper[4870]: I0216 17:48:05.999009 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 
16 17:48:06 crc kubenswrapper[4870]: E0216 17:48:05.999560 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:48:06 crc kubenswrapper[4870]: I0216 17:48:06.003109 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx6h7" event={"ID":"744b3a3c-3637-49a1-8138-f35288c979b4","Type":"ContainerStarted","Data":"ff966c8876c1f7522876f07fbcb322f62bc61ea2266ac9ad94a730f3147bf1f9"} Feb 16 17:48:06 crc kubenswrapper[4870]: I0216 17:48:06.054161 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tx6h7" podStartSLOduration=2.579767481 podStartE2EDuration="5.054139894s" podCreationTimestamp="2026-02-16 17:48:01 +0000 UTC" firstStartedPulling="2026-02-16 17:48:02.963343217 +0000 UTC m=+2887.446807601" lastFinishedPulling="2026-02-16 17:48:05.43771563 +0000 UTC m=+2889.921180014" observedRunningTime="2026-02-16 17:48:06.048221156 +0000 UTC m=+2890.531685540" watchObservedRunningTime="2026-02-16 17:48:06.054139894 +0000 UTC m=+2890.537604278" Feb 16 17:48:11 crc kubenswrapper[4870]: I0216 17:48:11.646556 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tx6h7" Feb 16 17:48:11 crc kubenswrapper[4870]: I0216 17:48:11.647356 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tx6h7" Feb 16 17:48:11 crc kubenswrapper[4870]: I0216 17:48:11.706362 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-tx6h7" Feb 16 17:48:12 crc kubenswrapper[4870]: I0216 17:48:12.124487 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tx6h7" Feb 16 17:48:12 crc kubenswrapper[4870]: I0216 17:48:12.176483 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tx6h7"] Feb 16 17:48:14 crc kubenswrapper[4870]: I0216 17:48:14.088939 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tx6h7" podUID="744b3a3c-3637-49a1-8138-f35288c979b4" containerName="registry-server" containerID="cri-o://ff966c8876c1f7522876f07fbcb322f62bc61ea2266ac9ad94a730f3147bf1f9" gracePeriod=2 Feb 16 17:48:14 crc kubenswrapper[4870]: I0216 17:48:14.620683 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tx6h7" Feb 16 17:48:14 crc kubenswrapper[4870]: I0216 17:48:14.721071 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744b3a3c-3637-49a1-8138-f35288c979b4-utilities\") pod \"744b3a3c-3637-49a1-8138-f35288c979b4\" (UID: \"744b3a3c-3637-49a1-8138-f35288c979b4\") " Feb 16 17:48:14 crc kubenswrapper[4870]: I0216 17:48:14.721560 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744b3a3c-3637-49a1-8138-f35288c979b4-catalog-content\") pod \"744b3a3c-3637-49a1-8138-f35288c979b4\" (UID: \"744b3a3c-3637-49a1-8138-f35288c979b4\") " Feb 16 17:48:14 crc kubenswrapper[4870]: I0216 17:48:14.721675 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx84j\" (UniqueName: \"kubernetes.io/projected/744b3a3c-3637-49a1-8138-f35288c979b4-kube-api-access-bx84j\") pod 
\"744b3a3c-3637-49a1-8138-f35288c979b4\" (UID: \"744b3a3c-3637-49a1-8138-f35288c979b4\") " Feb 16 17:48:14 crc kubenswrapper[4870]: I0216 17:48:14.722097 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/744b3a3c-3637-49a1-8138-f35288c979b4-utilities" (OuterVolumeSpecName: "utilities") pod "744b3a3c-3637-49a1-8138-f35288c979b4" (UID: "744b3a3c-3637-49a1-8138-f35288c979b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:48:14 crc kubenswrapper[4870]: I0216 17:48:14.722444 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744b3a3c-3637-49a1-8138-f35288c979b4-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:48:14 crc kubenswrapper[4870]: I0216 17:48:14.729744 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/744b3a3c-3637-49a1-8138-f35288c979b4-kube-api-access-bx84j" (OuterVolumeSpecName: "kube-api-access-bx84j") pod "744b3a3c-3637-49a1-8138-f35288c979b4" (UID: "744b3a3c-3637-49a1-8138-f35288c979b4"). InnerVolumeSpecName "kube-api-access-bx84j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:48:14 crc kubenswrapper[4870]: I0216 17:48:14.827823 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx84j\" (UniqueName: \"kubernetes.io/projected/744b3a3c-3637-49a1-8138-f35288c979b4-kube-api-access-bx84j\") on node \"crc\" DevicePath \"\"" Feb 16 17:48:14 crc kubenswrapper[4870]: I0216 17:48:14.916821 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/744b3a3c-3637-49a1-8138-f35288c979b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "744b3a3c-3637-49a1-8138-f35288c979b4" (UID: "744b3a3c-3637-49a1-8138-f35288c979b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:48:14 crc kubenswrapper[4870]: I0216 17:48:14.930043 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744b3a3c-3637-49a1-8138-f35288c979b4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:48:15 crc kubenswrapper[4870]: I0216 17:48:15.097687 4870 generic.go:334] "Generic (PLEG): container finished" podID="744b3a3c-3637-49a1-8138-f35288c979b4" containerID="ff966c8876c1f7522876f07fbcb322f62bc61ea2266ac9ad94a730f3147bf1f9" exitCode=0 Feb 16 17:48:15 crc kubenswrapper[4870]: I0216 17:48:15.097737 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx6h7" event={"ID":"744b3a3c-3637-49a1-8138-f35288c979b4","Type":"ContainerDied","Data":"ff966c8876c1f7522876f07fbcb322f62bc61ea2266ac9ad94a730f3147bf1f9"} Feb 16 17:48:15 crc kubenswrapper[4870]: I0216 17:48:15.097751 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tx6h7" Feb 16 17:48:15 crc kubenswrapper[4870]: I0216 17:48:15.097769 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx6h7" event={"ID":"744b3a3c-3637-49a1-8138-f35288c979b4","Type":"ContainerDied","Data":"9f603854b535b3fba95e60639658b9c383e95300e775cb7f45d1fe7712c6d762"} Feb 16 17:48:15 crc kubenswrapper[4870]: I0216 17:48:15.097792 4870 scope.go:117] "RemoveContainer" containerID="ff966c8876c1f7522876f07fbcb322f62bc61ea2266ac9ad94a730f3147bf1f9" Feb 16 17:48:15 crc kubenswrapper[4870]: I0216 17:48:15.118937 4870 scope.go:117] "RemoveContainer" containerID="3c189ebca163b45615278738ed28e4be6e15b4f14dc387652268e8d22eace4d1" Feb 16 17:48:15 crc kubenswrapper[4870]: I0216 17:48:15.133561 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tx6h7"] Feb 16 17:48:15 crc kubenswrapper[4870]: I0216 17:48:15.145260 4870 scope.go:117] "RemoveContainer" containerID="3c24157f669c85d06f1835096544080ca6be234dcb0aae148c5d26e28ee980ad" Feb 16 17:48:15 crc kubenswrapper[4870]: I0216 17:48:15.146145 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tx6h7"] Feb 16 17:48:15 crc kubenswrapper[4870]: I0216 17:48:15.189298 4870 scope.go:117] "RemoveContainer" containerID="ff966c8876c1f7522876f07fbcb322f62bc61ea2266ac9ad94a730f3147bf1f9" Feb 16 17:48:15 crc kubenswrapper[4870]: E0216 17:48:15.190018 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff966c8876c1f7522876f07fbcb322f62bc61ea2266ac9ad94a730f3147bf1f9\": container with ID starting with ff966c8876c1f7522876f07fbcb322f62bc61ea2266ac9ad94a730f3147bf1f9 not found: ID does not exist" containerID="ff966c8876c1f7522876f07fbcb322f62bc61ea2266ac9ad94a730f3147bf1f9" Feb 16 17:48:15 crc kubenswrapper[4870]: I0216 17:48:15.190059 4870 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff966c8876c1f7522876f07fbcb322f62bc61ea2266ac9ad94a730f3147bf1f9"} err="failed to get container status \"ff966c8876c1f7522876f07fbcb322f62bc61ea2266ac9ad94a730f3147bf1f9\": rpc error: code = NotFound desc = could not find container \"ff966c8876c1f7522876f07fbcb322f62bc61ea2266ac9ad94a730f3147bf1f9\": container with ID starting with ff966c8876c1f7522876f07fbcb322f62bc61ea2266ac9ad94a730f3147bf1f9 not found: ID does not exist" Feb 16 17:48:15 crc kubenswrapper[4870]: I0216 17:48:15.190086 4870 scope.go:117] "RemoveContainer" containerID="3c189ebca163b45615278738ed28e4be6e15b4f14dc387652268e8d22eace4d1" Feb 16 17:48:15 crc kubenswrapper[4870]: E0216 17:48:15.190626 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c189ebca163b45615278738ed28e4be6e15b4f14dc387652268e8d22eace4d1\": container with ID starting with 3c189ebca163b45615278738ed28e4be6e15b4f14dc387652268e8d22eace4d1 not found: ID does not exist" containerID="3c189ebca163b45615278738ed28e4be6e15b4f14dc387652268e8d22eace4d1" Feb 16 17:48:15 crc kubenswrapper[4870]: I0216 17:48:15.190666 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c189ebca163b45615278738ed28e4be6e15b4f14dc387652268e8d22eace4d1"} err="failed to get container status \"3c189ebca163b45615278738ed28e4be6e15b4f14dc387652268e8d22eace4d1\": rpc error: code = NotFound desc = could not find container \"3c189ebca163b45615278738ed28e4be6e15b4f14dc387652268e8d22eace4d1\": container with ID starting with 3c189ebca163b45615278738ed28e4be6e15b4f14dc387652268e8d22eace4d1 not found: ID does not exist" Feb 16 17:48:15 crc kubenswrapper[4870]: I0216 17:48:15.190695 4870 scope.go:117] "RemoveContainer" containerID="3c24157f669c85d06f1835096544080ca6be234dcb0aae148c5d26e28ee980ad" Feb 16 17:48:15 crc kubenswrapper[4870]: E0216 
17:48:15.191047 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c24157f669c85d06f1835096544080ca6be234dcb0aae148c5d26e28ee980ad\": container with ID starting with 3c24157f669c85d06f1835096544080ca6be234dcb0aae148c5d26e28ee980ad not found: ID does not exist" containerID="3c24157f669c85d06f1835096544080ca6be234dcb0aae148c5d26e28ee980ad" Feb 16 17:48:15 crc kubenswrapper[4870]: I0216 17:48:15.191101 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c24157f669c85d06f1835096544080ca6be234dcb0aae148c5d26e28ee980ad"} err="failed to get container status \"3c24157f669c85d06f1835096544080ca6be234dcb0aae148c5d26e28ee980ad\": rpc error: code = NotFound desc = could not find container \"3c24157f669c85d06f1835096544080ca6be234dcb0aae148c5d26e28ee980ad\": container with ID starting with 3c24157f669c85d06f1835096544080ca6be234dcb0aae148c5d26e28ee980ad not found: ID does not exist" Feb 16 17:48:16 crc kubenswrapper[4870]: I0216 17:48:16.222830 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:48:16 crc kubenswrapper[4870]: E0216 17:48:16.223445 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:48:16 crc kubenswrapper[4870]: E0216 17:48:16.230005 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:48:16 crc kubenswrapper[4870]: I0216 17:48:16.234635 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="744b3a3c-3637-49a1-8138-f35288c979b4" path="/var/lib/kubelet/pods/744b3a3c-3637-49a1-8138-f35288c979b4/volumes" Feb 16 17:48:28 crc kubenswrapper[4870]: E0216 17:48:28.227585 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:48:29 crc kubenswrapper[4870]: I0216 17:48:29.223097 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:48:29 crc kubenswrapper[4870]: E0216 17:48:29.223776 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:48:41 crc kubenswrapper[4870]: E0216 17:48:41.225488 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:48:43 crc kubenswrapper[4870]: I0216 17:48:43.223512 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 
16 17:48:43 crc kubenswrapper[4870]: E0216 17:48:43.224256 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:48:54 crc kubenswrapper[4870]: E0216 17:48:54.225176 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:48:55 crc kubenswrapper[4870]: I0216 17:48:55.222636 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:48:55 crc kubenswrapper[4870]: E0216 17:48:55.223185 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:49:08 crc kubenswrapper[4870]: I0216 17:49:08.223411 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:49:08 crc kubenswrapper[4870]: E0216 17:49:08.224603 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:49:08 crc kubenswrapper[4870]: E0216 17:49:08.228866 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:49:20 crc kubenswrapper[4870]: E0216 17:49:20.225748 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:49:22 crc kubenswrapper[4870]: I0216 17:49:22.222649 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:49:22 crc kubenswrapper[4870]: E0216 17:49:22.223212 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:49:34 crc kubenswrapper[4870]: E0216 17:49:34.224527 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:49:36 crc kubenswrapper[4870]: I0216 17:49:36.231451 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:49:36 crc kubenswrapper[4870]: E0216 17:49:36.232075 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:49:46 crc kubenswrapper[4870]: E0216 17:49:46.230838 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:49:47 crc kubenswrapper[4870]: I0216 17:49:47.811421 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jd9s2/must-gather-wl9b5"] Feb 16 17:49:47 crc kubenswrapper[4870]: E0216 17:49:47.812130 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744b3a3c-3637-49a1-8138-f35288c979b4" containerName="extract-utilities" Feb 16 17:49:47 crc kubenswrapper[4870]: I0216 17:49:47.812144 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="744b3a3c-3637-49a1-8138-f35288c979b4" containerName="extract-utilities" Feb 16 17:49:47 crc kubenswrapper[4870]: E0216 17:49:47.812162 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744b3a3c-3637-49a1-8138-f35288c979b4" 
containerName="extract-content" Feb 16 17:49:47 crc kubenswrapper[4870]: I0216 17:49:47.812170 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="744b3a3c-3637-49a1-8138-f35288c979b4" containerName="extract-content" Feb 16 17:49:47 crc kubenswrapper[4870]: E0216 17:49:47.812181 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744b3a3c-3637-49a1-8138-f35288c979b4" containerName="registry-server" Feb 16 17:49:47 crc kubenswrapper[4870]: I0216 17:49:47.812187 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="744b3a3c-3637-49a1-8138-f35288c979b4" containerName="registry-server" Feb 16 17:49:47 crc kubenswrapper[4870]: I0216 17:49:47.812379 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="744b3a3c-3637-49a1-8138-f35288c979b4" containerName="registry-server" Feb 16 17:49:47 crc kubenswrapper[4870]: I0216 17:49:47.813534 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jd9s2/must-gather-wl9b5" Feb 16 17:49:47 crc kubenswrapper[4870]: I0216 17:49:47.817813 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jd9s2"/"kube-root-ca.crt" Feb 16 17:49:47 crc kubenswrapper[4870]: I0216 17:49:47.819126 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jd9s2"/"openshift-service-ca.crt" Feb 16 17:49:47 crc kubenswrapper[4870]: I0216 17:49:47.848258 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jd9s2/must-gather-wl9b5"] Feb 16 17:49:47 crc kubenswrapper[4870]: I0216 17:49:47.910524 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxp7q\" (UniqueName: \"kubernetes.io/projected/85f69a46-bfb5-43f0-829b-e54d8ea13b95-kube-api-access-rxp7q\") pod \"must-gather-wl9b5\" (UID: \"85f69a46-bfb5-43f0-829b-e54d8ea13b95\") " pod="openshift-must-gather-jd9s2/must-gather-wl9b5" Feb 16 17:49:47 crc 
kubenswrapper[4870]: I0216 17:49:47.910973 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/85f69a46-bfb5-43f0-829b-e54d8ea13b95-must-gather-output\") pod \"must-gather-wl9b5\" (UID: \"85f69a46-bfb5-43f0-829b-e54d8ea13b95\") " pod="openshift-must-gather-jd9s2/must-gather-wl9b5" Feb 16 17:49:48 crc kubenswrapper[4870]: I0216 17:49:48.013423 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxp7q\" (UniqueName: \"kubernetes.io/projected/85f69a46-bfb5-43f0-829b-e54d8ea13b95-kube-api-access-rxp7q\") pod \"must-gather-wl9b5\" (UID: \"85f69a46-bfb5-43f0-829b-e54d8ea13b95\") " pod="openshift-must-gather-jd9s2/must-gather-wl9b5" Feb 16 17:49:48 crc kubenswrapper[4870]: I0216 17:49:48.013793 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/85f69a46-bfb5-43f0-829b-e54d8ea13b95-must-gather-output\") pod \"must-gather-wl9b5\" (UID: \"85f69a46-bfb5-43f0-829b-e54d8ea13b95\") " pod="openshift-must-gather-jd9s2/must-gather-wl9b5" Feb 16 17:49:48 crc kubenswrapper[4870]: I0216 17:49:48.014327 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/85f69a46-bfb5-43f0-829b-e54d8ea13b95-must-gather-output\") pod \"must-gather-wl9b5\" (UID: \"85f69a46-bfb5-43f0-829b-e54d8ea13b95\") " pod="openshift-must-gather-jd9s2/must-gather-wl9b5" Feb 16 17:49:48 crc kubenswrapper[4870]: I0216 17:49:48.038144 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxp7q\" (UniqueName: \"kubernetes.io/projected/85f69a46-bfb5-43f0-829b-e54d8ea13b95-kube-api-access-rxp7q\") pod \"must-gather-wl9b5\" (UID: \"85f69a46-bfb5-43f0-829b-e54d8ea13b95\") " pod="openshift-must-gather-jd9s2/must-gather-wl9b5" Feb 16 17:49:48 crc 
kubenswrapper[4870]: I0216 17:49:48.133536 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jd9s2/must-gather-wl9b5" Feb 16 17:49:48 crc kubenswrapper[4870]: I0216 17:49:48.627789 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jd9s2/must-gather-wl9b5"] Feb 16 17:49:48 crc kubenswrapper[4870]: W0216 17:49:48.638072 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85f69a46_bfb5_43f0_829b_e54d8ea13b95.slice/crio-8a3626386407c2c1a845063b96826d03b2f74be100a8bd269145b0e1d6ec436a WatchSource:0}: Error finding container 8a3626386407c2c1a845063b96826d03b2f74be100a8bd269145b0e1d6ec436a: Status 404 returned error can't find the container with id 8a3626386407c2c1a845063b96826d03b2f74be100a8bd269145b0e1d6ec436a Feb 16 17:49:49 crc kubenswrapper[4870]: I0216 17:49:49.041753 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jd9s2/must-gather-wl9b5" event={"ID":"85f69a46-bfb5-43f0-829b-e54d8ea13b95","Type":"ContainerStarted","Data":"8a3626386407c2c1a845063b96826d03b2f74be100a8bd269145b0e1d6ec436a"} Feb 16 17:49:50 crc kubenswrapper[4870]: I0216 17:49:50.227254 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:49:50 crc kubenswrapper[4870]: E0216 17:49:50.227641 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:49:56 crc kubenswrapper[4870]: I0216 17:49:56.130682 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-jd9s2/must-gather-wl9b5" event={"ID":"85f69a46-bfb5-43f0-829b-e54d8ea13b95","Type":"ContainerStarted","Data":"387ee1a7d94af3f34840651adc64e97c70ccad7039ca033be98716090bf666a7"} Feb 16 17:49:56 crc kubenswrapper[4870]: I0216 17:49:56.131890 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jd9s2/must-gather-wl9b5" event={"ID":"85f69a46-bfb5-43f0-829b-e54d8ea13b95","Type":"ContainerStarted","Data":"da74a5ef8533a7e17896bdca5375a98f1fae09c2160cd2544b7229c201801447"} Feb 16 17:49:56 crc kubenswrapper[4870]: I0216 17:49:56.158381 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jd9s2/must-gather-wl9b5" podStartSLOduration=2.514828783 podStartE2EDuration="9.158363129s" podCreationTimestamp="2026-02-16 17:49:47 +0000 UTC" firstStartedPulling="2026-02-16 17:49:48.642300248 +0000 UTC m=+2993.125764632" lastFinishedPulling="2026-02-16 17:49:55.285834594 +0000 UTC m=+2999.769298978" observedRunningTime="2026-02-16 17:49:56.150473865 +0000 UTC m=+3000.633938249" watchObservedRunningTime="2026-02-16 17:49:56.158363129 +0000 UTC m=+3000.641827513" Feb 16 17:49:59 crc kubenswrapper[4870]: E0216 17:49:59.481733 4870 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.204:48024->38.102.83.204:35655: write tcp 38.102.83.204:48024->38.102.83.204:35655: write: connection reset by peer Feb 16 17:50:00 crc kubenswrapper[4870]: I0216 17:50:00.185181 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jd9s2/crc-debug-fmfhc"] Feb 16 17:50:00 crc kubenswrapper[4870]: I0216 17:50:00.186686 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jd9s2/crc-debug-fmfhc" Feb 16 17:50:00 crc kubenswrapper[4870]: I0216 17:50:00.189029 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jd9s2"/"default-dockercfg-pzxcp" Feb 16 17:50:00 crc kubenswrapper[4870]: I0216 17:50:00.299060 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20f73e39-99ca-45ca-8084-f7dce723a1ab-host\") pod \"crc-debug-fmfhc\" (UID: \"20f73e39-99ca-45ca-8084-f7dce723a1ab\") " pod="openshift-must-gather-jd9s2/crc-debug-fmfhc" Feb 16 17:50:00 crc kubenswrapper[4870]: I0216 17:50:00.299140 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfn7s\" (UniqueName: \"kubernetes.io/projected/20f73e39-99ca-45ca-8084-f7dce723a1ab-kube-api-access-nfn7s\") pod \"crc-debug-fmfhc\" (UID: \"20f73e39-99ca-45ca-8084-f7dce723a1ab\") " pod="openshift-must-gather-jd9s2/crc-debug-fmfhc" Feb 16 17:50:00 crc kubenswrapper[4870]: I0216 17:50:00.401037 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20f73e39-99ca-45ca-8084-f7dce723a1ab-host\") pod \"crc-debug-fmfhc\" (UID: \"20f73e39-99ca-45ca-8084-f7dce723a1ab\") " pod="openshift-must-gather-jd9s2/crc-debug-fmfhc" Feb 16 17:50:00 crc kubenswrapper[4870]: I0216 17:50:00.401459 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfn7s\" (UniqueName: \"kubernetes.io/projected/20f73e39-99ca-45ca-8084-f7dce723a1ab-kube-api-access-nfn7s\") pod \"crc-debug-fmfhc\" (UID: \"20f73e39-99ca-45ca-8084-f7dce723a1ab\") " pod="openshift-must-gather-jd9s2/crc-debug-fmfhc" Feb 16 17:50:00 crc kubenswrapper[4870]: I0216 17:50:00.401410 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/20f73e39-99ca-45ca-8084-f7dce723a1ab-host\") pod \"crc-debug-fmfhc\" (UID: \"20f73e39-99ca-45ca-8084-f7dce723a1ab\") " pod="openshift-must-gather-jd9s2/crc-debug-fmfhc" Feb 16 17:50:00 crc kubenswrapper[4870]: I0216 17:50:00.436423 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfn7s\" (UniqueName: \"kubernetes.io/projected/20f73e39-99ca-45ca-8084-f7dce723a1ab-kube-api-access-nfn7s\") pod \"crc-debug-fmfhc\" (UID: \"20f73e39-99ca-45ca-8084-f7dce723a1ab\") " pod="openshift-must-gather-jd9s2/crc-debug-fmfhc" Feb 16 17:50:00 crc kubenswrapper[4870]: I0216 17:50:00.508441 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jd9s2/crc-debug-fmfhc" Feb 16 17:50:01 crc kubenswrapper[4870]: I0216 17:50:01.193577 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jd9s2/crc-debug-fmfhc" event={"ID":"20f73e39-99ca-45ca-8084-f7dce723a1ab","Type":"ContainerStarted","Data":"585911d87d1d1168482ffeeff9888263c1cebbab4bf85feffa5f83e940af2133"} Feb 16 17:50:01 crc kubenswrapper[4870]: E0216 17:50:01.224393 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:50:05 crc kubenswrapper[4870]: I0216 17:50:05.222813 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:50:05 crc kubenswrapper[4870]: E0216 17:50:05.223521 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:50:13 crc kubenswrapper[4870]: E0216 17:50:13.226109 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:50:14 crc kubenswrapper[4870]: I0216 17:50:14.410651 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jd9s2/crc-debug-fmfhc" event={"ID":"20f73e39-99ca-45ca-8084-f7dce723a1ab","Type":"ContainerStarted","Data":"086a5db785aae46ca55d534a05407b478706d1d8c1667f191c3ebe4c997c9d60"} Feb 16 17:50:14 crc kubenswrapper[4870]: I0216 17:50:14.431545 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jd9s2/crc-debug-fmfhc" podStartSLOduration=1.317565959 podStartE2EDuration="14.43151699s" podCreationTimestamp="2026-02-16 17:50:00 +0000 UTC" firstStartedPulling="2026-02-16 17:50:00.551644645 +0000 UTC m=+3005.035109029" lastFinishedPulling="2026-02-16 17:50:13.665595676 +0000 UTC m=+3018.149060060" observedRunningTime="2026-02-16 17:50:14.422746611 +0000 UTC m=+3018.906210995" watchObservedRunningTime="2026-02-16 17:50:14.43151699 +0000 UTC m=+3018.914981374" Feb 16 17:50:16 crc kubenswrapper[4870]: I0216 17:50:16.231610 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:50:16 crc kubenswrapper[4870]: E0216 17:50:16.232565 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:50:28 crc kubenswrapper[4870]: I0216 17:50:28.225163 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:50:28 crc kubenswrapper[4870]: E0216 17:50:28.225840 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:50:28 crc kubenswrapper[4870]: E0216 17:50:28.232798 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:50:31 crc kubenswrapper[4870]: I0216 17:50:31.571500 4870 generic.go:334] "Generic (PLEG): container finished" podID="20f73e39-99ca-45ca-8084-f7dce723a1ab" containerID="086a5db785aae46ca55d534a05407b478706d1d8c1667f191c3ebe4c997c9d60" exitCode=0 Feb 16 17:50:31 crc kubenswrapper[4870]: I0216 17:50:31.571558 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jd9s2/crc-debug-fmfhc" event={"ID":"20f73e39-99ca-45ca-8084-f7dce723a1ab","Type":"ContainerDied","Data":"086a5db785aae46ca55d534a05407b478706d1d8c1667f191c3ebe4c997c9d60"} Feb 16 17:50:32 crc kubenswrapper[4870]: I0216 17:50:32.713595 4870 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-must-gather-jd9s2/crc-debug-fmfhc" Feb 16 17:50:32 crc kubenswrapper[4870]: I0216 17:50:32.746715 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jd9s2/crc-debug-fmfhc"] Feb 16 17:50:32 crc kubenswrapper[4870]: I0216 17:50:32.755067 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jd9s2/crc-debug-fmfhc"] Feb 16 17:50:32 crc kubenswrapper[4870]: I0216 17:50:32.806681 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfn7s\" (UniqueName: \"kubernetes.io/projected/20f73e39-99ca-45ca-8084-f7dce723a1ab-kube-api-access-nfn7s\") pod \"20f73e39-99ca-45ca-8084-f7dce723a1ab\" (UID: \"20f73e39-99ca-45ca-8084-f7dce723a1ab\") " Feb 16 17:50:32 crc kubenswrapper[4870]: I0216 17:50:32.806847 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20f73e39-99ca-45ca-8084-f7dce723a1ab-host\") pod \"20f73e39-99ca-45ca-8084-f7dce723a1ab\" (UID: \"20f73e39-99ca-45ca-8084-f7dce723a1ab\") " Feb 16 17:50:32 crc kubenswrapper[4870]: I0216 17:50:32.806896 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20f73e39-99ca-45ca-8084-f7dce723a1ab-host" (OuterVolumeSpecName: "host") pod "20f73e39-99ca-45ca-8084-f7dce723a1ab" (UID: "20f73e39-99ca-45ca-8084-f7dce723a1ab"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:50:32 crc kubenswrapper[4870]: I0216 17:50:32.807432 4870 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20f73e39-99ca-45ca-8084-f7dce723a1ab-host\") on node \"crc\" DevicePath \"\"" Feb 16 17:50:32 crc kubenswrapper[4870]: I0216 17:50:32.812373 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20f73e39-99ca-45ca-8084-f7dce723a1ab-kube-api-access-nfn7s" (OuterVolumeSpecName: "kube-api-access-nfn7s") pod "20f73e39-99ca-45ca-8084-f7dce723a1ab" (UID: "20f73e39-99ca-45ca-8084-f7dce723a1ab"). InnerVolumeSpecName "kube-api-access-nfn7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:50:32 crc kubenswrapper[4870]: I0216 17:50:32.909342 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfn7s\" (UniqueName: \"kubernetes.io/projected/20f73e39-99ca-45ca-8084-f7dce723a1ab-kube-api-access-nfn7s\") on node \"crc\" DevicePath \"\"" Feb 16 17:50:33 crc kubenswrapper[4870]: I0216 17:50:33.604243 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="585911d87d1d1168482ffeeff9888263c1cebbab4bf85feffa5f83e940af2133" Feb 16 17:50:33 crc kubenswrapper[4870]: I0216 17:50:33.604324 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jd9s2/crc-debug-fmfhc" Feb 16 17:50:33 crc kubenswrapper[4870]: I0216 17:50:33.997499 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jd9s2/crc-debug-ccg5z"] Feb 16 17:50:33 crc kubenswrapper[4870]: E0216 17:50:33.998039 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f73e39-99ca-45ca-8084-f7dce723a1ab" containerName="container-00" Feb 16 17:50:33 crc kubenswrapper[4870]: I0216 17:50:33.998059 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f73e39-99ca-45ca-8084-f7dce723a1ab" containerName="container-00" Feb 16 17:50:33 crc kubenswrapper[4870]: I0216 17:50:33.998565 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f73e39-99ca-45ca-8084-f7dce723a1ab" containerName="container-00" Feb 16 17:50:33 crc kubenswrapper[4870]: I0216 17:50:33.999478 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jd9s2/crc-debug-ccg5z" Feb 16 17:50:34 crc kubenswrapper[4870]: I0216 17:50:34.006258 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jd9s2"/"default-dockercfg-pzxcp" Feb 16 17:50:34 crc kubenswrapper[4870]: I0216 17:50:34.035711 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afd0b277-02c6-47c0-8586-b6f05c6b4576-host\") pod \"crc-debug-ccg5z\" (UID: \"afd0b277-02c6-47c0-8586-b6f05c6b4576\") " pod="openshift-must-gather-jd9s2/crc-debug-ccg5z" Feb 16 17:50:34 crc kubenswrapper[4870]: I0216 17:50:34.035902 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbgpl\" (UniqueName: \"kubernetes.io/projected/afd0b277-02c6-47c0-8586-b6f05c6b4576-kube-api-access-mbgpl\") pod \"crc-debug-ccg5z\" (UID: \"afd0b277-02c6-47c0-8586-b6f05c6b4576\") " 
pod="openshift-must-gather-jd9s2/crc-debug-ccg5z" Feb 16 17:50:34 crc kubenswrapper[4870]: I0216 17:50:34.137231 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afd0b277-02c6-47c0-8586-b6f05c6b4576-host\") pod \"crc-debug-ccg5z\" (UID: \"afd0b277-02c6-47c0-8586-b6f05c6b4576\") " pod="openshift-must-gather-jd9s2/crc-debug-ccg5z" Feb 16 17:50:34 crc kubenswrapper[4870]: I0216 17:50:34.137344 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afd0b277-02c6-47c0-8586-b6f05c6b4576-host\") pod \"crc-debug-ccg5z\" (UID: \"afd0b277-02c6-47c0-8586-b6f05c6b4576\") " pod="openshift-must-gather-jd9s2/crc-debug-ccg5z" Feb 16 17:50:34 crc kubenswrapper[4870]: I0216 17:50:34.137354 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbgpl\" (UniqueName: \"kubernetes.io/projected/afd0b277-02c6-47c0-8586-b6f05c6b4576-kube-api-access-mbgpl\") pod \"crc-debug-ccg5z\" (UID: \"afd0b277-02c6-47c0-8586-b6f05c6b4576\") " pod="openshift-must-gather-jd9s2/crc-debug-ccg5z" Feb 16 17:50:34 crc kubenswrapper[4870]: I0216 17:50:34.154403 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbgpl\" (UniqueName: \"kubernetes.io/projected/afd0b277-02c6-47c0-8586-b6f05c6b4576-kube-api-access-mbgpl\") pod \"crc-debug-ccg5z\" (UID: \"afd0b277-02c6-47c0-8586-b6f05c6b4576\") " pod="openshift-must-gather-jd9s2/crc-debug-ccg5z" Feb 16 17:50:34 crc kubenswrapper[4870]: I0216 17:50:34.233910 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20f73e39-99ca-45ca-8084-f7dce723a1ab" path="/var/lib/kubelet/pods/20f73e39-99ca-45ca-8084-f7dce723a1ab/volumes" Feb 16 17:50:34 crc kubenswrapper[4870]: I0216 17:50:34.321857 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jd9s2/crc-debug-ccg5z" Feb 16 17:50:34 crc kubenswrapper[4870]: W0216 17:50:34.356102 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafd0b277_02c6_47c0_8586_b6f05c6b4576.slice/crio-f2994993550fae44295f9fd94be0c61996aee5eb56f138d1df993fe29d092d14 WatchSource:0}: Error finding container f2994993550fae44295f9fd94be0c61996aee5eb56f138d1df993fe29d092d14: Status 404 returned error can't find the container with id f2994993550fae44295f9fd94be0c61996aee5eb56f138d1df993fe29d092d14 Feb 16 17:50:34 crc kubenswrapper[4870]: I0216 17:50:34.617758 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jd9s2/crc-debug-ccg5z" event={"ID":"afd0b277-02c6-47c0-8586-b6f05c6b4576","Type":"ContainerStarted","Data":"f2994993550fae44295f9fd94be0c61996aee5eb56f138d1df993fe29d092d14"} Feb 16 17:50:35 crc kubenswrapper[4870]: I0216 17:50:35.632742 4870 generic.go:334] "Generic (PLEG): container finished" podID="afd0b277-02c6-47c0-8586-b6f05c6b4576" containerID="8028984b701b35b6b4aee53a041131693086121a14e6c6949fb201ef99e52ce8" exitCode=1 Feb 16 17:50:35 crc kubenswrapper[4870]: I0216 17:50:35.632802 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jd9s2/crc-debug-ccg5z" event={"ID":"afd0b277-02c6-47c0-8586-b6f05c6b4576","Type":"ContainerDied","Data":"8028984b701b35b6b4aee53a041131693086121a14e6c6949fb201ef99e52ce8"} Feb 16 17:50:35 crc kubenswrapper[4870]: I0216 17:50:35.744762 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jd9s2/crc-debug-ccg5z"] Feb 16 17:50:35 crc kubenswrapper[4870]: I0216 17:50:35.754098 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jd9s2/crc-debug-ccg5z"] Feb 16 17:50:36 crc kubenswrapper[4870]: I0216 17:50:36.775982 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jd9s2/crc-debug-ccg5z" Feb 16 17:50:36 crc kubenswrapper[4870]: I0216 17:50:36.890242 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbgpl\" (UniqueName: \"kubernetes.io/projected/afd0b277-02c6-47c0-8586-b6f05c6b4576-kube-api-access-mbgpl\") pod \"afd0b277-02c6-47c0-8586-b6f05c6b4576\" (UID: \"afd0b277-02c6-47c0-8586-b6f05c6b4576\") " Feb 16 17:50:36 crc kubenswrapper[4870]: I0216 17:50:36.890450 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afd0b277-02c6-47c0-8586-b6f05c6b4576-host\") pod \"afd0b277-02c6-47c0-8586-b6f05c6b4576\" (UID: \"afd0b277-02c6-47c0-8586-b6f05c6b4576\") " Feb 16 17:50:36 crc kubenswrapper[4870]: I0216 17:50:36.891181 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afd0b277-02c6-47c0-8586-b6f05c6b4576-host" (OuterVolumeSpecName: "host") pod "afd0b277-02c6-47c0-8586-b6f05c6b4576" (UID: "afd0b277-02c6-47c0-8586-b6f05c6b4576"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:50:36 crc kubenswrapper[4870]: I0216 17:50:36.901322 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd0b277-02c6-47c0-8586-b6f05c6b4576-kube-api-access-mbgpl" (OuterVolumeSpecName: "kube-api-access-mbgpl") pod "afd0b277-02c6-47c0-8586-b6f05c6b4576" (UID: "afd0b277-02c6-47c0-8586-b6f05c6b4576"). InnerVolumeSpecName "kube-api-access-mbgpl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:50:36 crc kubenswrapper[4870]: I0216 17:50:36.993259 4870 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afd0b277-02c6-47c0-8586-b6f05c6b4576-host\") on node \"crc\" DevicePath \"\"" Feb 16 17:50:36 crc kubenswrapper[4870]: I0216 17:50:36.993292 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbgpl\" (UniqueName: \"kubernetes.io/projected/afd0b277-02c6-47c0-8586-b6f05c6b4576-kube-api-access-mbgpl\") on node \"crc\" DevicePath \"\"" Feb 16 17:50:37 crc kubenswrapper[4870]: I0216 17:50:37.654514 4870 scope.go:117] "RemoveContainer" containerID="8028984b701b35b6b4aee53a041131693086121a14e6c6949fb201ef99e52ce8" Feb 16 17:50:37 crc kubenswrapper[4870]: I0216 17:50:37.654572 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jd9s2/crc-debug-ccg5z" Feb 16 17:50:38 crc kubenswrapper[4870]: I0216 17:50:38.235098 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd0b277-02c6-47c0-8586-b6f05c6b4576" path="/var/lib/kubelet/pods/afd0b277-02c6-47c0-8586-b6f05c6b4576/volumes" Feb 16 17:50:40 crc kubenswrapper[4870]: E0216 17:50:40.225186 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:50:43 crc kubenswrapper[4870]: I0216 17:50:43.222876 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:50:43 crc kubenswrapper[4870]: E0216 17:50:43.224159 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:50:54 crc kubenswrapper[4870]: E0216 17:50:54.226292 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:50:57 crc kubenswrapper[4870]: I0216 17:50:57.223175 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:50:57 crc kubenswrapper[4870]: E0216 17:50:57.224077 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:51:09 crc kubenswrapper[4870]: E0216 17:51:09.225113 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:51:10 crc kubenswrapper[4870]: I0216 17:51:10.222916 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:51:10 crc kubenswrapper[4870]: E0216 
17:51:10.223542 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:51:20 crc kubenswrapper[4870]: E0216 17:51:20.233487 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:51:23 crc kubenswrapper[4870]: I0216 17:51:23.223210 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:51:23 crc kubenswrapper[4870]: E0216 17:51:23.223985 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:51:28 crc kubenswrapper[4870]: I0216 17:51:28.012967 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_79e9de5e-117f-4d5e-bfee-bad481a8c0b8/init-config-reloader/0.log" Feb 16 17:51:28 crc kubenswrapper[4870]: I0216 17:51:28.136991 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_79e9de5e-117f-4d5e-bfee-bad481a8c0b8/init-config-reloader/0.log" Feb 16 
17:51:28 crc kubenswrapper[4870]: I0216 17:51:28.153698 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_79e9de5e-117f-4d5e-bfee-bad481a8c0b8/alertmanager/0.log" Feb 16 17:51:28 crc kubenswrapper[4870]: I0216 17:51:28.196535 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_79e9de5e-117f-4d5e-bfee-bad481a8c0b8/config-reloader/0.log" Feb 16 17:51:28 crc kubenswrapper[4870]: I0216 17:51:28.343918 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-958875f6b-md5pd_d4da9035-cb64-4693-9364-66edc8e1cea6/barbican-api/0.log" Feb 16 17:51:28 crc kubenswrapper[4870]: I0216 17:51:28.364094 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-958875f6b-md5pd_d4da9035-cb64-4693-9364-66edc8e1cea6/barbican-api-log/0.log" Feb 16 17:51:28 crc kubenswrapper[4870]: I0216 17:51:28.483135 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-d9984d7fd-5x6fd_d01fcbdc-1303-44a6-95ff-cffdad0e2fa6/barbican-keystone-listener/0.log" Feb 16 17:51:28 crc kubenswrapper[4870]: I0216 17:51:28.551175 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-d9984d7fd-5x6fd_d01fcbdc-1303-44a6-95ff-cffdad0e2fa6/barbican-keystone-listener-log/0.log" Feb 16 17:51:28 crc kubenswrapper[4870]: I0216 17:51:28.657295 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c84768f67-lv86b_55dc3430-223f-4944-9678-6a93b6d69499/barbican-worker/0.log" Feb 16 17:51:28 crc kubenswrapper[4870]: I0216 17:51:28.661622 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c84768f67-lv86b_55dc3430-223f-4944-9678-6a93b6d69499/barbican-worker-log/0.log" Feb 16 17:51:28 crc kubenswrapper[4870]: I0216 17:51:28.804359 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_e0dd084b-7f2e-42bf-b06d-71ffdfaa195a/ceilometer-central-agent/0.log" Feb 16 17:51:28 crc kubenswrapper[4870]: I0216 17:51:28.860524 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e0dd084b-7f2e-42bf-b06d-71ffdfaa195a/ceilometer-notification-agent/0.log" Feb 16 17:51:28 crc kubenswrapper[4870]: I0216 17:51:28.868261 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e0dd084b-7f2e-42bf-b06d-71ffdfaa195a/proxy-httpd/0.log" Feb 16 17:51:28 crc kubenswrapper[4870]: I0216 17:51:28.999339 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e0dd084b-7f2e-42bf-b06d-71ffdfaa195a/sg-core/0.log" Feb 16 17:51:29 crc kubenswrapper[4870]: I0216 17:51:29.284703 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_46039c25-14ee-4091-8b9c-8bddcd95d44f/cinder-api-log/0.log" Feb 16 17:51:29 crc kubenswrapper[4870]: I0216 17:51:29.305293 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_46039c25-14ee-4091-8b9c-8bddcd95d44f/cinder-api/0.log" Feb 16 17:51:29 crc kubenswrapper[4870]: I0216 17:51:29.375806 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1f1bdefa-44ab-4760-9fe6-fea5802dfde1/cinder-scheduler/0.log" Feb 16 17:51:29 crc kubenswrapper[4870]: I0216 17:51:29.524587 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1f1bdefa-44ab-4760-9fe6-fea5802dfde1/probe/0.log" Feb 16 17:51:29 crc kubenswrapper[4870]: I0216 17:51:29.716936 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_ca54e9a0-3f4f-4f0b-96cb-56ecd8015d24/loki-compactor/0.log" Feb 16 17:51:29 crc kubenswrapper[4870]: I0216 17:51:29.791350 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-585d9bcbc-547gr_69d435d2-948e-44d4-b0c2-8e1db0efb383/loki-distributor/0.log" Feb 16 17:51:29 crc kubenswrapper[4870]: I0216 17:51:29.895320 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-gdknd_56c2e555-c8e4-4391-bec8-9b98ed7a830b/gateway/0.log" Feb 16 17:51:29 crc kubenswrapper[4870]: I0216 17:51:29.988213 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-nhw2f_3b99d5cf-946f-4e7f-980d-1e6bf6aec95e/gateway/0.log" Feb 16 17:51:30 crc kubenswrapper[4870]: I0216 17:51:30.074462 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_c7e13f68-6de2-4cf5-b655-77e0c2141ea1/loki-index-gateway/0.log" Feb 16 17:51:30 crc kubenswrapper[4870]: I0216 17:51:30.184684 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_d158e8d5-206e-4289-a1e5-247fddf29a11/loki-ingester/0.log" Feb 16 17:51:30 crc kubenswrapper[4870]: I0216 17:51:30.311586 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-58c84b5844-bgr8z_e6a8818a-de8b-4bbd-b7a8-5d68251b8b3d/loki-querier/0.log" Feb 16 17:51:30 crc kubenswrapper[4870]: I0216 17:51:30.394387 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-67bb4dfcd8-6hldn_3e04ba57-8554-4553-a62f-8b6787ba96dd/loki-query-frontend/0.log" Feb 16 17:51:30 crc kubenswrapper[4870]: I0216 17:51:30.515294 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-n8bg4_babf44d3-8b05-43fa-8c73-bb2ade1d08dd/init/0.log" Feb 16 17:51:30 crc kubenswrapper[4870]: I0216 17:51:30.719292 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59/glance-httpd/0.log" Feb 16 17:51:30 crc kubenswrapper[4870]: I0216 17:51:30.742674 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-n8bg4_babf44d3-8b05-43fa-8c73-bb2ade1d08dd/init/0.log" Feb 16 17:51:30 crc kubenswrapper[4870]: I0216 17:51:30.756803 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-n8bg4_babf44d3-8b05-43fa-8c73-bb2ade1d08dd/dnsmasq-dns/0.log" Feb 16 17:51:30 crc kubenswrapper[4870]: I0216 17:51:30.935925 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ecb1dc5e-74e4-4f4b-a8a9-56d8bd55bc59/glance-log/0.log" Feb 16 17:51:30 crc kubenswrapper[4870]: I0216 17:51:30.979468 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_66aea147-403d-4f20-837e-2a492e54cb60/glance-httpd/0.log" Feb 16 17:51:31 crc kubenswrapper[4870]: I0216 17:51:31.012415 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_66aea147-403d-4f20-837e-2a492e54cb60/glance-log/0.log" Feb 16 17:51:31 crc kubenswrapper[4870]: I0216 17:51:31.201765 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6df8d5ad-619e-4953-9e10-ac1c43c20e3e/kube-state-metrics/0.log" Feb 16 17:51:31 crc kubenswrapper[4870]: I0216 17:51:31.261136 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-568fd566f-ltx6b_25726b72-a54a-4482-8440-671195187a49/keystone-api/0.log" Feb 16 17:51:31 crc kubenswrapper[4870]: I0216 17:51:31.558455 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-67bf48b897-78ftj_3ec0f9b7-8e31-4b80-bb3b-5245632bc524/neutron-api/0.log" Feb 16 17:51:31 crc kubenswrapper[4870]: I0216 17:51:31.666988 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-67bf48b897-78ftj_3ec0f9b7-8e31-4b80-bb3b-5245632bc524/neutron-httpd/0.log" Feb 16 17:51:32 crc kubenswrapper[4870]: I0216 17:51:32.094518 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2dd2dad7-e696-4e73-91bf-572ee65c541a/nova-api-api/0.log" Feb 16 17:51:32 crc kubenswrapper[4870]: I0216 17:51:32.123229 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2dd2dad7-e696-4e73-91bf-572ee65c541a/nova-api-log/0.log" Feb 16 17:51:32 crc kubenswrapper[4870]: I0216 17:51:32.368935 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7e0d8f9c-1ee5-40e6-9ccc-8dd626f4f720/nova-cell0-conductor-conductor/0.log" Feb 16 17:51:32 crc kubenswrapper[4870]: I0216 17:51:32.474195 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_11d31404-683b-4ac8-9d85-7b5425843395/nova-cell1-conductor-conductor/0.log" Feb 16 17:51:32 crc kubenswrapper[4870]: I0216 17:51:32.624396 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_9d8c610c-2125-423f-a856-03f0aeebc8fc/nova-cell1-novncproxy-novncproxy/0.log" Feb 16 17:51:32 crc kubenswrapper[4870]: I0216 17:51:32.791234 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_328df347-b011-47c4-912c-a4eb850c9146/nova-metadata-log/0.log" Feb 16 17:51:33 crc kubenswrapper[4870]: I0216 17:51:33.016193 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ae3a5425-a813-4cb1-8b27-c19fb83c7fbc/nova-scheduler-scheduler/0.log" Feb 16 17:51:33 crc kubenswrapper[4870]: I0216 17:51:33.078111 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1c107984-3d0e-4627-98a9-0830571e42fa/mysql-bootstrap/0.log" Feb 16 17:51:33 crc kubenswrapper[4870]: I0216 17:51:33.306926 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_1c107984-3d0e-4627-98a9-0830571e42fa/galera/0.log" Feb 16 17:51:33 crc kubenswrapper[4870]: I0216 17:51:33.322136 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1c107984-3d0e-4627-98a9-0830571e42fa/mysql-bootstrap/0.log" Feb 16 17:51:33 crc kubenswrapper[4870]: I0216 17:51:33.505686 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a6723230-3e6b-43cc-bda7-2aac2faa0e67/mysql-bootstrap/0.log" Feb 16 17:51:33 crc kubenswrapper[4870]: I0216 17:51:33.746278 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a6723230-3e6b-43cc-bda7-2aac2faa0e67/mysql-bootstrap/0.log" Feb 16 17:51:33 crc kubenswrapper[4870]: I0216 17:51:33.751072 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_328df347-b011-47c4-912c-a4eb850c9146/nova-metadata-metadata/0.log" Feb 16 17:51:33 crc kubenswrapper[4870]: I0216 17:51:33.777821 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a6723230-3e6b-43cc-bda7-2aac2faa0e67/galera/0.log" Feb 16 17:51:33 crc kubenswrapper[4870]: I0216 17:51:33.940002 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_e5ffb6c2-c33b-4118-985e-52a0e14ba938/openstackclient/0.log" Feb 16 17:51:34 crc kubenswrapper[4870]: I0216 17:51:34.023849 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ktsg2_2f4b2faa-7ab7-40c8-a28f-d93749011dbe/ovn-controller/0.log" Feb 16 17:51:34 crc kubenswrapper[4870]: I0216 17:51:34.181148 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-b8jsv_a6713b2c-65e7-42c3-8cdc-4ef240f57ee1/openstack-network-exporter/0.log" Feb 16 17:51:34 crc kubenswrapper[4870]: I0216 17:51:34.269532 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-rh6tb_34012c0b-1886-446c-983e-6a1351630186/ovsdb-server-init/0.log" Feb 16 17:51:34 crc kubenswrapper[4870]: I0216 17:51:34.444860 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rh6tb_34012c0b-1886-446c-983e-6a1351630186/ovs-vswitchd/0.log" Feb 16 17:51:34 crc kubenswrapper[4870]: I0216 17:51:34.503828 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rh6tb_34012c0b-1886-446c-983e-6a1351630186/ovsdb-server-init/0.log" Feb 16 17:51:34 crc kubenswrapper[4870]: I0216 17:51:34.520744 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rh6tb_34012c0b-1886-446c-983e-6a1351630186/ovsdb-server/0.log" Feb 16 17:51:34 crc kubenswrapper[4870]: I0216 17:51:34.708653 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9b2d9aae-f384-4c40-adfb-35224530b735/openstack-network-exporter/0.log" Feb 16 17:51:34 crc kubenswrapper[4870]: I0216 17:51:34.713523 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9b2d9aae-f384-4c40-adfb-35224530b735/ovn-northd/0.log" Feb 16 17:51:34 crc kubenswrapper[4870]: I0216 17:51:34.804887 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d/openstack-network-exporter/0.log" Feb 16 17:51:34 crc kubenswrapper[4870]: I0216 17:51:34.921538 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6be0bc8f-cbbf-4f5e-98c4-4b46ffa0041d/ovsdbserver-nb/0.log" Feb 16 17:51:35 crc kubenswrapper[4870]: I0216 17:51:35.018445 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe/openstack-network-exporter/0.log" Feb 16 17:51:35 crc kubenswrapper[4870]: I0216 17:51:35.088752 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_c4e1ed4d-c5c8-4497-960a-0035c3fc3fbe/ovsdbserver-sb/0.log" Feb 16 17:51:35 crc kubenswrapper[4870]: I0216 17:51:35.204000 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-756f867d68-hgndg_1ebe7703-7d1a-47d0-b3b2-8965365beb56/placement-api/0.log" Feb 16 17:51:35 crc kubenswrapper[4870]: I0216 17:51:35.227846 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:51:35 crc kubenswrapper[4870]: I0216 17:51:35.313149 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-756f867d68-hgndg_1ebe7703-7d1a-47d0-b3b2-8965365beb56/placement-log/0.log" Feb 16 17:51:35 crc kubenswrapper[4870]: E0216 17:51:35.366106 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:51:35 crc kubenswrapper[4870]: E0216 17:51:35.366170 4870 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:51:35 crc kubenswrapper[4870]: E0216 17:51:35.366320 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7b2p7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6hkdm_openstack(34a86750-1fff-4add-8462-7ab805ec7f89): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:51:35 crc kubenswrapper[4870]: E0216 17:51:35.367408 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:51:35 crc kubenswrapper[4870]: I0216 17:51:35.375034 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_39dde5ae-2522-43c8-a0e0-9e257052bab6/init-config-reloader/0.log" Feb 16 17:51:35 crc kubenswrapper[4870]: I0216 17:51:35.564591 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_39dde5ae-2522-43c8-a0e0-9e257052bab6/init-config-reloader/0.log" Feb 16 17:51:35 crc kubenswrapper[4870]: I0216 17:51:35.581932 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_39dde5ae-2522-43c8-a0e0-9e257052bab6/prometheus/0.log" Feb 16 17:51:35 crc kubenswrapper[4870]: I0216 17:51:35.593806 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_39dde5ae-2522-43c8-a0e0-9e257052bab6/config-reloader/0.log" Feb 16 17:51:35 crc kubenswrapper[4870]: I0216 17:51:35.622738 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_39dde5ae-2522-43c8-a0e0-9e257052bab6/thanos-sidecar/0.log" Feb 16 17:51:35 crc kubenswrapper[4870]: I0216 17:51:35.759002 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_66aba020-76f1-4cf7-992b-0745bd3c3512/setup-container/0.log" Feb 16 17:51:36 crc kubenswrapper[4870]: I0216 17:51:36.072716 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_66aba020-76f1-4cf7-992b-0745bd3c3512/rabbitmq/0.log" Feb 16 17:51:36 crc kubenswrapper[4870]: I0216 17:51:36.084618 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d027dcfc-cbb1-4c78-b55f-0ed148b1faad/setup-container/0.log" Feb 16 17:51:36 crc kubenswrapper[4870]: I0216 17:51:36.103504 4870 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_66aba020-76f1-4cf7-992b-0745bd3c3512/setup-container/0.log" Feb 16 17:51:36 crc kubenswrapper[4870]: I0216 17:51:36.230242 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:51:36 crc kubenswrapper[4870]: E0216 17:51:36.230544 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:51:36 crc kubenswrapper[4870]: I0216 17:51:36.310174 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d027dcfc-cbb1-4c78-b55f-0ed148b1faad/rabbitmq/0.log" Feb 16 17:51:36 crc kubenswrapper[4870]: I0216 17:51:36.353683 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d027dcfc-cbb1-4c78-b55f-0ed148b1faad/setup-container/0.log" Feb 16 17:51:36 crc kubenswrapper[4870]: I0216 17:51:36.469811 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8f5dc8565-bnkj8_cc25f232-f484-409d-ac24-fc126dc679d4/proxy-httpd/0.log" Feb 16 17:51:36 crc kubenswrapper[4870]: I0216 17:51:36.517192 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8f5dc8565-bnkj8_cc25f232-f484-409d-ac24-fc126dc679d4/proxy-server/0.log" Feb 16 17:51:36 crc kubenswrapper[4870]: I0216 17:51:36.579477 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-gnnq2_bdf9770f-4fe7-4b42-9968-4fc4461ef6aa/swift-ring-rebalance/0.log" Feb 16 17:51:36 crc kubenswrapper[4870]: I0216 17:51:36.733788 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_669a24d2-3e17-4ce1-aba2-c45d2a92683a/account-auditor/0.log" Feb 16 17:51:36 crc kubenswrapper[4870]: I0216 17:51:36.806582 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_669a24d2-3e17-4ce1-aba2-c45d2a92683a/account-replicator/0.log" Feb 16 17:51:36 crc kubenswrapper[4870]: I0216 17:51:36.822104 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_669a24d2-3e17-4ce1-aba2-c45d2a92683a/account-reaper/0.log" Feb 16 17:51:36 crc kubenswrapper[4870]: I0216 17:51:36.916281 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_669a24d2-3e17-4ce1-aba2-c45d2a92683a/account-server/0.log" Feb 16 17:51:36 crc kubenswrapper[4870]: I0216 17:51:36.949705 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_669a24d2-3e17-4ce1-aba2-c45d2a92683a/container-auditor/0.log" Feb 16 17:51:37 crc kubenswrapper[4870]: I0216 17:51:37.049199 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_669a24d2-3e17-4ce1-aba2-c45d2a92683a/container-server/0.log" Feb 16 17:51:37 crc kubenswrapper[4870]: I0216 17:51:37.063589 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_669a24d2-3e17-4ce1-aba2-c45d2a92683a/container-replicator/0.log" Feb 16 17:51:37 crc kubenswrapper[4870]: I0216 17:51:37.155641 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_669a24d2-3e17-4ce1-aba2-c45d2a92683a/object-auditor/0.log" Feb 16 17:51:37 crc kubenswrapper[4870]: I0216 17:51:37.176134 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_669a24d2-3e17-4ce1-aba2-c45d2a92683a/container-updater/0.log" Feb 16 17:51:37 crc kubenswrapper[4870]: I0216 17:51:37.244918 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_669a24d2-3e17-4ce1-aba2-c45d2a92683a/object-expirer/0.log" Feb 16 17:51:37 crc kubenswrapper[4870]: I0216 17:51:37.317497 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_669a24d2-3e17-4ce1-aba2-c45d2a92683a/object-replicator/0.log" Feb 16 17:51:37 crc kubenswrapper[4870]: I0216 17:51:37.338876 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_669a24d2-3e17-4ce1-aba2-c45d2a92683a/object-server/0.log" Feb 16 17:51:37 crc kubenswrapper[4870]: I0216 17:51:37.379826 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_669a24d2-3e17-4ce1-aba2-c45d2a92683a/object-updater/0.log" Feb 16 17:51:37 crc kubenswrapper[4870]: I0216 17:51:37.438053 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_669a24d2-3e17-4ce1-aba2-c45d2a92683a/rsync/0.log" Feb 16 17:51:37 crc kubenswrapper[4870]: I0216 17:51:37.505694 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_669a24d2-3e17-4ce1-aba2-c45d2a92683a/swift-recon-cron/0.log" Feb 16 17:51:40 crc kubenswrapper[4870]: I0216 17:51:40.928062 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_858288c8-7418-43d3-ae1c-7974c170239d/memcached/0.log" Feb 16 17:51:49 crc kubenswrapper[4870]: E0216 17:51:49.225377 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:51:51 crc kubenswrapper[4870]: I0216 17:51:51.223693 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:51:51 crc kubenswrapper[4870]: E0216 
17:51:51.224255 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:51:59 crc kubenswrapper[4870]: I0216 17:51:59.909827 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw_60cefeeb-704a-4af2-9df4-f497a9d77e64/util/0.log" Feb 16 17:52:00 crc kubenswrapper[4870]: I0216 17:52:00.129901 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw_60cefeeb-704a-4af2-9df4-f497a9d77e64/pull/0.log" Feb 16 17:52:00 crc kubenswrapper[4870]: I0216 17:52:00.135003 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw_60cefeeb-704a-4af2-9df4-f497a9d77e64/pull/0.log" Feb 16 17:52:00 crc kubenswrapper[4870]: I0216 17:52:00.147807 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw_60cefeeb-704a-4af2-9df4-f497a9d77e64/util/0.log" Feb 16 17:52:00 crc kubenswrapper[4870]: I0216 17:52:00.287791 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw_60cefeeb-704a-4af2-9df4-f497a9d77e64/util/0.log" Feb 16 17:52:00 crc kubenswrapper[4870]: I0216 17:52:00.297822 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw_60cefeeb-704a-4af2-9df4-f497a9d77e64/pull/0.log" Feb 16 17:52:00 crc kubenswrapper[4870]: I0216 17:52:00.320996 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e98tghdw_60cefeeb-704a-4af2-9df4-f497a9d77e64/extract/0.log" Feb 16 17:52:00 crc kubenswrapper[4870]: I0216 17:52:00.724840 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-v7gn2_0d52da13-82bf-439f-ac03-bbf3f539de78/manager/0.log" Feb 16 17:52:01 crc kubenswrapper[4870]: I0216 17:52:01.070009 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-t9hz5_aef27f45-8abf-44f3-a290-c6d49dbfa1fd/manager/0.log" Feb 16 17:52:01 crc kubenswrapper[4870]: I0216 17:52:01.334361 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-gcgvr_6d98461a-872e-4857-a100-f905a5231b83/manager/0.log" Feb 16 17:52:01 crc kubenswrapper[4870]: I0216 17:52:01.513317 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-hqjtw_bb375b58-f7fa-4006-b087-cb06ea0cfc86/manager/0.log" Feb 16 17:52:02 crc kubenswrapper[4870]: I0216 17:52:02.031428 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-68kws_a104184d-e08b-46ef-8595-6b21f2826f9a/manager/0.log" Feb 16 17:52:02 crc kubenswrapper[4870]: I0216 17:52:02.049903 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-ns7zz_2c0e615e-3bf7-4627-b800-af60affed5f5/manager/0.log" Feb 16 17:52:02 crc kubenswrapper[4870]: I0216 17:52:02.072258 4870 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-4cm6m_5d8420fa-9cbd-47f7-a252-a187de8515cd/manager/0.log" Feb 16 17:52:02 crc kubenswrapper[4870]: I0216 17:52:02.374023 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-b9czx_dd026e8b-7f8a-4c07-9bee-84e0fe1e535f/manager/0.log" Feb 16 17:52:02 crc kubenswrapper[4870]: I0216 17:52:02.498010 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-k468x_65381e3b-70d2-4dbf-a1e5-279696c5cc09/manager/0.log" Feb 16 17:52:02 crc kubenswrapper[4870]: I0216 17:52:02.631393 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-58msl_7a88f711-61b7-44b5-a82e-5c909efc50e9/manager/0.log" Feb 16 17:52:02 crc kubenswrapper[4870]: I0216 17:52:02.845808 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-nx2h7_a8be2317-5b27-4e2b-b403-66d75647fda1/manager/0.log" Feb 16 17:52:03 crc kubenswrapper[4870]: I0216 17:52:03.196835 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-wzn2r_3e3dfdb0-abc8-458b-811a-752f6bd9430e/manager/0.log" Feb 16 17:52:03 crc kubenswrapper[4870]: E0216 17:52:03.238515 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:52:03 crc kubenswrapper[4870]: I0216 17:52:03.460490 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9crzhd5_5f7c2918-26fd-46fb-bae6-52fbdd3eded7/manager/0.log" Feb 16 17:52:03 crc kubenswrapper[4870]: I0216 17:52:03.829035 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6f655b9d6d-rjqbs_c8592842-a07d-45a3-a74c-f322156994b2/operator/0.log" Feb 16 17:52:04 crc kubenswrapper[4870]: I0216 17:52:04.104312 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8v8hn_8bc858a0-aec4-4366-85ea-d046f8d8464e/registry-server/0.log" Feb 16 17:52:04 crc kubenswrapper[4870]: I0216 17:52:04.223031 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:52:04 crc kubenswrapper[4870]: E0216 17:52:04.223446 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:52:04 crc kubenswrapper[4870]: I0216 17:52:04.378035 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-cnmml_98cc8d06-5e5f-406b-9212-0053c9c66238/manager/0.log" Feb 16 17:52:04 crc kubenswrapper[4870]: I0216 17:52:04.633687 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-d6pgp_5d2f7cba-dd0c-43f0-9b4b-a2097c4a457a/manager/0.log" Feb 16 17:52:04 crc kubenswrapper[4870]: I0216 17:52:04.813546 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-bfq8r_256aea85-8852-4a34-98ef-3c9e07b30453/manager/0.log" Feb 16 17:52:04 crc kubenswrapper[4870]: I0216 17:52:04.889932 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-k4rjv_2a0ef8b9-a1a9-4014-b5dd-b7356c6b411c/operator/0.log" Feb 16 17:52:05 crc kubenswrapper[4870]: I0216 17:52:05.095086 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-wr4hc_21ffa52a-74b8-444f-a1b6-95dfb4096974/manager/0.log" Feb 16 17:52:05 crc kubenswrapper[4870]: I0216 17:52:05.199538 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6f58b764dd-f84j7_44780e56-bccf-4440-b6c6-0333808b2e02/manager/0.log" Feb 16 17:52:05 crc kubenswrapper[4870]: I0216 17:52:05.325814 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-lvqfx_e7030097-1d81-471d-8731-13a271f38050/manager/0.log" Feb 16 17:52:05 crc kubenswrapper[4870]: I0216 17:52:05.493544 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-8cf9q_feb3d8e0-ace7-4aa3-9621-e56d57e7b510/manager/0.log" Feb 16 17:52:05 crc kubenswrapper[4870]: I0216 17:52:05.739890 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5884f785c-hjhdz_3abc2c2d-aaa8-42a3-876f-1107127dab28/manager/0.log" Feb 16 17:52:08 crc kubenswrapper[4870]: I0216 17:52:08.065393 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-cchkt_e7d3d5ca-7088-46e0-88eb-bc8f1270b85d/manager/0.log" Feb 16 17:52:15 crc kubenswrapper[4870]: I0216 17:52:15.223108 4870 scope.go:117] 
"RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:52:15 crc kubenswrapper[4870]: E0216 17:52:15.223759 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:52:17 crc kubenswrapper[4870]: E0216 17:52:17.225006 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:52:24 crc kubenswrapper[4870]: I0216 17:52:24.043533 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-bn28c_fa1008c7-de78-4cc4-93d1-b6b22198a05a/control-plane-machine-set-operator/0.log" Feb 16 17:52:24 crc kubenswrapper[4870]: I0216 17:52:24.237782 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-br5s9_68b31442-b3cc-486b-8fd1-e968978c9f1c/kube-rbac-proxy/0.log" Feb 16 17:52:24 crc kubenswrapper[4870]: I0216 17:52:24.290614 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-br5s9_68b31442-b3cc-486b-8fd1-e968978c9f1c/machine-api-operator/0.log" Feb 16 17:52:26 crc kubenswrapper[4870]: I0216 17:52:26.229240 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:52:26 crc kubenswrapper[4870]: E0216 
17:52:26.229930 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:52:28 crc kubenswrapper[4870]: E0216 17:52:28.225151 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:52:36 crc kubenswrapper[4870]: I0216 17:52:36.458483 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-jmgm8_37b05018-476e-4fc8-9f96-2d6ed226a0fa/cert-manager-controller/0.log" Feb 16 17:52:36 crc kubenswrapper[4870]: I0216 17:52:36.630752 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-7dmcz_762fd866-f1ec-485f-a6c8-5230b0806f2c/cert-manager-cainjector/0.log" Feb 16 17:52:36 crc kubenswrapper[4870]: I0216 17:52:36.705825 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-n9xgs_541762a5-2ac3-47b8-84a7-4bab2757e90a/cert-manager-webhook/0.log" Feb 16 17:52:37 crc kubenswrapper[4870]: I0216 17:52:37.222833 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:52:37 crc kubenswrapper[4870]: E0216 17:52:37.223487 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:52:40 crc kubenswrapper[4870]: E0216 17:52:40.224850 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:52:48 crc kubenswrapper[4870]: I0216 17:52:48.837996 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-bhvfd_8d8293d9-58e9-4f2f-a81e-efbcdfe14d27/nmstate-console-plugin/0.log" Feb 16 17:52:48 crc kubenswrapper[4870]: I0216 17:52:48.998866 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5xr9s_16269c6b-867e-4277-91a2-52456a4424f2/nmstate-handler/0.log" Feb 16 17:52:49 crc kubenswrapper[4870]: I0216 17:52:49.098953 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-klh69_cb34d88d-4dbf-4253-8537-dda975a9985a/kube-rbac-proxy/0.log" Feb 16 17:52:49 crc kubenswrapper[4870]: I0216 17:52:49.146685 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-klh69_cb34d88d-4dbf-4253-8537-dda975a9985a/nmstate-metrics/0.log" Feb 16 17:52:49 crc kubenswrapper[4870]: I0216 17:52:49.252934 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-6vj2z_bd8f5c2a-410f-40e4-9672-272f95aacea1/nmstate-operator/0.log" Feb 16 17:52:49 crc kubenswrapper[4870]: I0216 17:52:49.382643 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-s4fff_699cb420-5e4c-42ea-841c-b368459f6a2e/nmstate-webhook/0.log" Feb 16 17:52:50 crc kubenswrapper[4870]: I0216 17:52:50.222874 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:52:50 crc kubenswrapper[4870]: E0216 17:52:50.228300 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:52:54 crc kubenswrapper[4870]: E0216 17:52:54.224195 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:53:01 crc kubenswrapper[4870]: I0216 17:53:01.222508 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:53:01 crc kubenswrapper[4870]: E0216 17:53:01.223201 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" Feb 16 17:53:01 crc kubenswrapper[4870]: I0216 17:53:01.557870 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-797c678dc4-pv4ll_79183cb8-6455-4b66-9732-d3eb9604ab48/kube-rbac-proxy/0.log" Feb 16 17:53:01 crc kubenswrapper[4870]: I0216 17:53:01.641274 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-797c678dc4-pv4ll_79183cb8-6455-4b66-9732-d3eb9604ab48/manager/0.log" Feb 16 17:53:05 crc kubenswrapper[4870]: E0216 17:53:05.225328 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:53:13 crc kubenswrapper[4870]: I0216 17:53:13.670850 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-v797k_d26622cb-47bc-4378-867b-abad855869a5/prometheus-operator/0.log" Feb 16 17:53:13 crc kubenswrapper[4870]: I0216 17:53:13.843094 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a/prometheus-operator-admission-webhook/0.log" Feb 16 17:53:13 crc kubenswrapper[4870]: I0216 17:53:13.866505 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_063bbc92-30f0-4cb3-9f15-8f303b2fe4d0/prometheus-operator-admission-webhook/0.log" Feb 16 17:53:14 crc kubenswrapper[4870]: I0216 17:53:14.057063 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7mxf6_7726c8d8-365d-4b95-9b6c-2c95c221f1f4/operator/0.log" Feb 16 17:53:14 crc kubenswrapper[4870]: I0216 17:53:14.087317 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-pbwnb_02ecd319-55c1-4189-bf38-35f08025630c/perses-operator/0.log" Feb 16 17:53:16 crc kubenswrapper[4870]: I0216 17:53:16.229725 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:53:16 crc kubenswrapper[4870]: E0216 17:53:16.234752 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:53:16 crc kubenswrapper[4870]: I0216 17:53:16.453031 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerStarted","Data":"fad609847562591995d01b404761ef2f912723b7c6df5a3a9c2782bc74728b1f"} Feb 16 17:53:26 crc kubenswrapper[4870]: I0216 17:53:26.915911 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-2qwb2_6b8e45f6-6719-4e08-832f-fb4074dc21b7/kube-rbac-proxy/0.log" Feb 16 17:53:26 crc kubenswrapper[4870]: I0216 17:53:26.991139 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-2qwb2_6b8e45f6-6719-4e08-832f-fb4074dc21b7/controller/0.log" Feb 16 17:53:27 crc kubenswrapper[4870]: I0216 17:53:27.099662 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/cp-frr-files/0.log" Feb 16 17:53:27 crc kubenswrapper[4870]: I0216 17:53:27.244112 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/cp-reloader/0.log" Feb 16 17:53:27 crc kubenswrapper[4870]: I0216 17:53:27.251587 
4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/cp-metrics/0.log" Feb 16 17:53:27 crc kubenswrapper[4870]: I0216 17:53:27.258437 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/cp-frr-files/0.log" Feb 16 17:53:27 crc kubenswrapper[4870]: I0216 17:53:27.312984 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/cp-reloader/0.log" Feb 16 17:53:27 crc kubenswrapper[4870]: I0216 17:53:27.456588 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/cp-frr-files/0.log" Feb 16 17:53:27 crc kubenswrapper[4870]: I0216 17:53:27.457736 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/cp-metrics/0.log" Feb 16 17:53:27 crc kubenswrapper[4870]: I0216 17:53:27.496589 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/cp-reloader/0.log" Feb 16 17:53:27 crc kubenswrapper[4870]: I0216 17:53:27.505708 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/cp-metrics/0.log" Feb 16 17:53:27 crc kubenswrapper[4870]: I0216 17:53:27.676188 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/cp-metrics/0.log" Feb 16 17:53:27 crc kubenswrapper[4870]: I0216 17:53:27.680428 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/cp-reloader/0.log" Feb 16 17:53:27 crc kubenswrapper[4870]: I0216 17:53:27.681992 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/cp-frr-files/0.log" Feb 16 17:53:27 crc kubenswrapper[4870]: I0216 17:53:27.708835 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/controller/0.log" Feb 16 17:53:27 crc kubenswrapper[4870]: I0216 17:53:27.863044 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/frr-metrics/0.log" Feb 16 17:53:27 crc kubenswrapper[4870]: I0216 17:53:27.898530 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/kube-rbac-proxy/0.log" Feb 16 17:53:27 crc kubenswrapper[4870]: I0216 17:53:27.929598 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/kube-rbac-proxy-frr/0.log" Feb 16 17:53:28 crc kubenswrapper[4870]: I0216 17:53:28.097558 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/reloader/0.log" Feb 16 17:53:28 crc kubenswrapper[4870]: I0216 17:53:28.152206 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-sl5w2_601db539-ff99-4286-b333-89a76d744d27/frr-k8s-webhook-server/0.log" Feb 16 17:53:28 crc kubenswrapper[4870]: I0216 17:53:28.422735 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-dfd88577c-t9fpt_8005c88f-1465-4e77-bcd5-b58fe22b8055/manager/0.log" Feb 16 17:53:28 crc kubenswrapper[4870]: I0216 17:53:28.575521 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6b886dc755-4pxj2_c7148109-37fd-4199-9cca-df3f97d2d070/webhook-server/0.log" Feb 16 17:53:28 crc kubenswrapper[4870]: I0216 17:53:28.676559 4870 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fh77x_d1cdf458-f970-4341-b2e9-f0752bf88a9c/kube-rbac-proxy/0.log" Feb 16 17:53:29 crc kubenswrapper[4870]: I0216 17:53:29.069438 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qg9jr_d51255e7-059d-4c69-9a01-f90249fe53bf/frr/0.log" Feb 16 17:53:29 crc kubenswrapper[4870]: I0216 17:53:29.373702 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fh77x_d1cdf458-f970-4341-b2e9-f0752bf88a9c/speaker/0.log" Feb 16 17:53:30 crc kubenswrapper[4870]: E0216 17:53:30.226012 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:53:42 crc kubenswrapper[4870]: I0216 17:53:42.217499 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm_56929b47-8ff7-4aed-83a9-781ca5cf1c4a/util/0.log" Feb 16 17:53:42 crc kubenswrapper[4870]: I0216 17:53:42.390231 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm_56929b47-8ff7-4aed-83a9-781ca5cf1c4a/util/0.log" Feb 16 17:53:42 crc kubenswrapper[4870]: I0216 17:53:42.458787 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm_56929b47-8ff7-4aed-83a9-781ca5cf1c4a/pull/0.log" Feb 16 17:53:42 crc kubenswrapper[4870]: I0216 17:53:42.503133 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm_56929b47-8ff7-4aed-83a9-781ca5cf1c4a/pull/0.log" Feb 16 17:53:42 crc kubenswrapper[4870]: I0216 17:53:42.646500 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm_56929b47-8ff7-4aed-83a9-781ca5cf1c4a/pull/0.log" Feb 16 17:53:42 crc kubenswrapper[4870]: I0216 17:53:42.649984 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm_56929b47-8ff7-4aed-83a9-781ca5cf1c4a/util/0.log" Feb 16 17:53:42 crc kubenswrapper[4870]: I0216 17:53:42.662291 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l2krm_56929b47-8ff7-4aed-83a9-781ca5cf1c4a/extract/0.log" Feb 16 17:53:42 crc kubenswrapper[4870]: I0216 17:53:42.841727 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r_ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb/util/0.log" Feb 16 17:53:43 crc kubenswrapper[4870]: I0216 17:53:43.011168 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r_ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb/util/0.log" Feb 16 17:53:43 crc kubenswrapper[4870]: I0216 17:53:43.028504 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r_ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb/pull/0.log" Feb 16 17:53:43 crc kubenswrapper[4870]: I0216 17:53:43.028662 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r_ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb/pull/0.log" Feb 16 
17:53:43 crc kubenswrapper[4870]: I0216 17:53:43.200552 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r_ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb/pull/0.log" Feb 16 17:53:43 crc kubenswrapper[4870]: I0216 17:53:43.203257 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r_ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb/util/0.log" Feb 16 17:53:43 crc kubenswrapper[4870]: I0216 17:53:43.272035 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08r5j2r_ea424f45-0c9e-4c87-9ed7-67f3c3f31fcb/extract/0.log" Feb 16 17:53:43 crc kubenswrapper[4870]: I0216 17:53:43.400296 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl_60291662-7eb9-46bf-afbc-e75937b19398/util/0.log" Feb 16 17:53:43 crc kubenswrapper[4870]: I0216 17:53:43.642871 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl_60291662-7eb9-46bf-afbc-e75937b19398/util/0.log" Feb 16 17:53:43 crc kubenswrapper[4870]: I0216 17:53:43.644421 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl_60291662-7eb9-46bf-afbc-e75937b19398/pull/0.log" Feb 16 17:53:43 crc kubenswrapper[4870]: I0216 17:53:43.665522 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl_60291662-7eb9-46bf-afbc-e75937b19398/pull/0.log" Feb 16 17:53:43 crc kubenswrapper[4870]: I0216 17:53:43.793262 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl_60291662-7eb9-46bf-afbc-e75937b19398/util/0.log" Feb 16 17:53:43 crc kubenswrapper[4870]: I0216 17:53:43.840089 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl_60291662-7eb9-46bf-afbc-e75937b19398/extract/0.log" Feb 16 17:53:43 crc kubenswrapper[4870]: I0216 17:53:43.850440 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132r9jl_60291662-7eb9-46bf-afbc-e75937b19398/pull/0.log" Feb 16 17:53:44 crc kubenswrapper[4870]: I0216 17:53:44.018588 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7kzc_01e5d5b9-01ec-4e85-910d-26a8ca382930/extract-utilities/0.log" Feb 16 17:53:44 crc kubenswrapper[4870]: I0216 17:53:44.201650 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7kzc_01e5d5b9-01ec-4e85-910d-26a8ca382930/extract-utilities/0.log" Feb 16 17:53:44 crc kubenswrapper[4870]: I0216 17:53:44.212539 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7kzc_01e5d5b9-01ec-4e85-910d-26a8ca382930/extract-content/0.log" Feb 16 17:53:44 crc kubenswrapper[4870]: E0216 17:53:44.225267 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:53:44 crc kubenswrapper[4870]: I0216 17:53:44.247988 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-d7kzc_01e5d5b9-01ec-4e85-910d-26a8ca382930/extract-content/0.log" Feb 16 17:53:44 crc kubenswrapper[4870]: I0216 17:53:44.421655 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7kzc_01e5d5b9-01ec-4e85-910d-26a8ca382930/extract-utilities/0.log" Feb 16 17:53:44 crc kubenswrapper[4870]: I0216 17:53:44.434528 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7kzc_01e5d5b9-01ec-4e85-910d-26a8ca382930/extract-content/0.log" Feb 16 17:53:44 crc kubenswrapper[4870]: I0216 17:53:44.697479 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sz5lq_bc80d675-c023-458d-8287-3e56add1b1d2/extract-utilities/0.log" Feb 16 17:53:44 crc kubenswrapper[4870]: I0216 17:53:44.718337 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7kzc_01e5d5b9-01ec-4e85-910d-26a8ca382930/registry-server/0.log" Feb 16 17:53:44 crc kubenswrapper[4870]: I0216 17:53:44.810383 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sz5lq_bc80d675-c023-458d-8287-3e56add1b1d2/extract-utilities/0.log" Feb 16 17:53:44 crc kubenswrapper[4870]: I0216 17:53:44.896184 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sz5lq_bc80d675-c023-458d-8287-3e56add1b1d2/extract-content/0.log" Feb 16 17:53:44 crc kubenswrapper[4870]: I0216 17:53:44.909492 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sz5lq_bc80d675-c023-458d-8287-3e56add1b1d2/extract-content/0.log" Feb 16 17:53:45 crc kubenswrapper[4870]: I0216 17:53:45.058167 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-sz5lq_bc80d675-c023-458d-8287-3e56add1b1d2/extract-content/0.log" Feb 16 17:53:45 crc kubenswrapper[4870]: I0216 17:53:45.067698 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sz5lq_bc80d675-c023-458d-8287-3e56add1b1d2/extract-utilities/0.log" Feb 16 17:53:45 crc kubenswrapper[4870]: I0216 17:53:45.294667 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45_428b9999-2e8a-4a52-9f89-c71abd6cd8a2/util/0.log" Feb 16 17:53:45 crc kubenswrapper[4870]: I0216 17:53:45.518530 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45_428b9999-2e8a-4a52-9f89-c71abd6cd8a2/util/0.log" Feb 16 17:53:45 crc kubenswrapper[4870]: I0216 17:53:45.523672 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45_428b9999-2e8a-4a52-9f89-c71abd6cd8a2/pull/0.log" Feb 16 17:53:45 crc kubenswrapper[4870]: I0216 17:53:45.524225 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45_428b9999-2e8a-4a52-9f89-c71abd6cd8a2/pull/0.log" Feb 16 17:53:45 crc kubenswrapper[4870]: I0216 17:53:45.775277 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sz5lq_bc80d675-c023-458d-8287-3e56add1b1d2/registry-server/0.log" Feb 16 17:53:45 crc kubenswrapper[4870]: I0216 17:53:45.807977 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45_428b9999-2e8a-4a52-9f89-c71abd6cd8a2/util/0.log" Feb 16 17:53:45 crc kubenswrapper[4870]: I0216 17:53:45.819111 4870 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45_428b9999-2e8a-4a52-9f89-c71abd6cd8a2/pull/0.log" Feb 16 17:53:45 crc kubenswrapper[4870]: I0216 17:53:45.876176 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecam9t45_428b9999-2e8a-4a52-9f89-c71abd6cd8a2/extract/0.log" Feb 16 17:53:45 crc kubenswrapper[4870]: I0216 17:53:45.990988 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-d4w5s_2d50d687-7be2-4b64-9b82-fe66fd2d091a/marketplace-operator/0.log" Feb 16 17:53:46 crc kubenswrapper[4870]: I0216 17:53:46.023843 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44v98_0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0/extract-utilities/0.log" Feb 16 17:53:46 crc kubenswrapper[4870]: I0216 17:53:46.233369 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44v98_0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0/extract-utilities/0.log" Feb 16 17:53:46 crc kubenswrapper[4870]: I0216 17:53:46.279233 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44v98_0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0/extract-content/0.log" Feb 16 17:53:46 crc kubenswrapper[4870]: I0216 17:53:46.280053 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44v98_0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0/extract-content/0.log" Feb 16 17:53:46 crc kubenswrapper[4870]: I0216 17:53:46.441729 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44v98_0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0/extract-content/0.log" Feb 16 17:53:46 crc kubenswrapper[4870]: I0216 17:53:46.492014 4870 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-44v98_0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0/extract-utilities/0.log" Feb 16 17:53:46 crc kubenswrapper[4870]: I0216 17:53:46.492238 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwn58_0d25274f-1b87-4f2b-90aa-71dc0c0b3184/extract-utilities/0.log" Feb 16 17:53:46 crc kubenswrapper[4870]: I0216 17:53:46.603540 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44v98_0da6a75d-9b18-4b2c-8e2f-356d5a7cd1e0/registry-server/0.log" Feb 16 17:53:46 crc kubenswrapper[4870]: I0216 17:53:46.642775 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwn58_0d25274f-1b87-4f2b-90aa-71dc0c0b3184/extract-utilities/0.log" Feb 16 17:53:46 crc kubenswrapper[4870]: I0216 17:53:46.713489 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwn58_0d25274f-1b87-4f2b-90aa-71dc0c0b3184/extract-content/0.log" Feb 16 17:53:46 crc kubenswrapper[4870]: I0216 17:53:46.721839 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwn58_0d25274f-1b87-4f2b-90aa-71dc0c0b3184/extract-content/0.log" Feb 16 17:53:46 crc kubenswrapper[4870]: I0216 17:53:46.900095 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwn58_0d25274f-1b87-4f2b-90aa-71dc0c0b3184/extract-content/0.log" Feb 16 17:53:46 crc kubenswrapper[4870]: I0216 17:53:46.901914 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwn58_0d25274f-1b87-4f2b-90aa-71dc0c0b3184/extract-utilities/0.log" Feb 16 17:53:47 crc kubenswrapper[4870]: I0216 17:53:47.406329 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwn58_0d25274f-1b87-4f2b-90aa-71dc0c0b3184/registry-server/0.log" Feb 16 
17:53:57 crc kubenswrapper[4870]: E0216 17:53:57.225747 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:53:59 crc kubenswrapper[4870]: I0216 17:53:59.106005 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5df4dc8d89-qkhbs_063bbc92-30f0-4cb3-9f15-8f303b2fe4d0/prometheus-operator-admission-webhook/0.log" Feb 16 17:53:59 crc kubenswrapper[4870]: I0216 17:53:59.144166 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-v797k_d26622cb-47bc-4378-867b-abad855869a5/prometheus-operator/0.log" Feb 16 17:53:59 crc kubenswrapper[4870]: I0216 17:53:59.189026 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5df4dc8d89-9d58x_10dfd4ae-ed85-4ae3-92c2-2094c9a6cd1a/prometheus-operator-admission-webhook/0.log" Feb 16 17:53:59 crc kubenswrapper[4870]: I0216 17:53:59.332968 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7mxf6_7726c8d8-365d-4b95-9b6c-2c95c221f1f4/operator/0.log" Feb 16 17:53:59 crc kubenswrapper[4870]: I0216 17:53:59.355231 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-pbwnb_02ecd319-55c1-4189-bf38-35f08025630c/perses-operator/0.log" Feb 16 17:54:10 crc kubenswrapper[4870]: E0216 17:54:10.225019 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:54:12 crc kubenswrapper[4870]: I0216 17:54:12.639220 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-797c678dc4-pv4ll_79183cb8-6455-4b66-9732-d3eb9604ab48/kube-rbac-proxy/0.log" Feb 16 17:54:12 crc kubenswrapper[4870]: I0216 17:54:12.714414 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-797c678dc4-pv4ll_79183cb8-6455-4b66-9732-d3eb9604ab48/manager/0.log" Feb 16 17:54:23 crc kubenswrapper[4870]: E0216 17:54:23.225301 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:54:35 crc kubenswrapper[4870]: E0216 17:54:35.225126 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:54:47 crc kubenswrapper[4870]: E0216 17:54:47.226158 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:55:01 crc kubenswrapper[4870]: E0216 17:55:01.225938 4870 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:55:12 crc kubenswrapper[4870]: E0216 17:55:12.225509 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:55:27 crc kubenswrapper[4870]: E0216 17:55:27.225593 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:55:35 crc kubenswrapper[4870]: I0216 17:55:35.366900 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:55:35 crc kubenswrapper[4870]: I0216 17:55:35.369157 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:55:39 crc kubenswrapper[4870]: E0216 17:55:39.225295 4870 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:55:46 crc kubenswrapper[4870]: I0216 17:55:46.856466 4870 generic.go:334] "Generic (PLEG): container finished" podID="85f69a46-bfb5-43f0-829b-e54d8ea13b95" containerID="da74a5ef8533a7e17896bdca5375a98f1fae09c2160cd2544b7229c201801447" exitCode=0 Feb 16 17:55:46 crc kubenswrapper[4870]: I0216 17:55:46.856573 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jd9s2/must-gather-wl9b5" event={"ID":"85f69a46-bfb5-43f0-829b-e54d8ea13b95","Type":"ContainerDied","Data":"da74a5ef8533a7e17896bdca5375a98f1fae09c2160cd2544b7229c201801447"} Feb 16 17:55:46 crc kubenswrapper[4870]: I0216 17:55:46.857663 4870 scope.go:117] "RemoveContainer" containerID="da74a5ef8533a7e17896bdca5375a98f1fae09c2160cd2544b7229c201801447" Feb 16 17:55:47 crc kubenswrapper[4870]: I0216 17:55:47.006813 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jd9s2_must-gather-wl9b5_85f69a46-bfb5-43f0-829b-e54d8ea13b95/gather/0.log" Feb 16 17:55:50 crc kubenswrapper[4870]: E0216 17:55:50.224829 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:55:54 crc kubenswrapper[4870]: I0216 17:55:54.139309 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jd9s2/must-gather-wl9b5"] Feb 16 17:55:54 crc kubenswrapper[4870]: I0216 17:55:54.139994 4870 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-must-gather-jd9s2/must-gather-wl9b5" podUID="85f69a46-bfb5-43f0-829b-e54d8ea13b95" containerName="copy" containerID="cri-o://387ee1a7d94af3f34840651adc64e97c70ccad7039ca033be98716090bf666a7" gracePeriod=2 Feb 16 17:55:54 crc kubenswrapper[4870]: I0216 17:55:54.153276 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jd9s2/must-gather-wl9b5"] Feb 16 17:55:54 crc kubenswrapper[4870]: I0216 17:55:54.770040 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jd9s2_must-gather-wl9b5_85f69a46-bfb5-43f0-829b-e54d8ea13b95/copy/0.log" Feb 16 17:55:54 crc kubenswrapper[4870]: I0216 17:55:54.770713 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jd9s2/must-gather-wl9b5" Feb 16 17:55:54 crc kubenswrapper[4870]: I0216 17:55:54.873213 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxp7q\" (UniqueName: \"kubernetes.io/projected/85f69a46-bfb5-43f0-829b-e54d8ea13b95-kube-api-access-rxp7q\") pod \"85f69a46-bfb5-43f0-829b-e54d8ea13b95\" (UID: \"85f69a46-bfb5-43f0-829b-e54d8ea13b95\") " Feb 16 17:55:54 crc kubenswrapper[4870]: I0216 17:55:54.873494 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/85f69a46-bfb5-43f0-829b-e54d8ea13b95-must-gather-output\") pod \"85f69a46-bfb5-43f0-829b-e54d8ea13b95\" (UID: \"85f69a46-bfb5-43f0-829b-e54d8ea13b95\") " Feb 16 17:55:54 crc kubenswrapper[4870]: I0216 17:55:54.881413 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85f69a46-bfb5-43f0-829b-e54d8ea13b95-kube-api-access-rxp7q" (OuterVolumeSpecName: "kube-api-access-rxp7q") pod "85f69a46-bfb5-43f0-829b-e54d8ea13b95" (UID: "85f69a46-bfb5-43f0-829b-e54d8ea13b95"). InnerVolumeSpecName "kube-api-access-rxp7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:55:54 crc kubenswrapper[4870]: I0216 17:55:54.929063 4870 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jd9s2_must-gather-wl9b5_85f69a46-bfb5-43f0-829b-e54d8ea13b95/copy/0.log" Feb 16 17:55:54 crc kubenswrapper[4870]: I0216 17:55:54.930174 4870 generic.go:334] "Generic (PLEG): container finished" podID="85f69a46-bfb5-43f0-829b-e54d8ea13b95" containerID="387ee1a7d94af3f34840651adc64e97c70ccad7039ca033be98716090bf666a7" exitCode=143 Feb 16 17:55:54 crc kubenswrapper[4870]: I0216 17:55:54.930234 4870 scope.go:117] "RemoveContainer" containerID="387ee1a7d94af3f34840651adc64e97c70ccad7039ca033be98716090bf666a7" Feb 16 17:55:54 crc kubenswrapper[4870]: I0216 17:55:54.930385 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jd9s2/must-gather-wl9b5" Feb 16 17:55:54 crc kubenswrapper[4870]: I0216 17:55:54.959627 4870 scope.go:117] "RemoveContainer" containerID="da74a5ef8533a7e17896bdca5375a98f1fae09c2160cd2544b7229c201801447" Feb 16 17:55:54 crc kubenswrapper[4870]: I0216 17:55:54.975835 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxp7q\" (UniqueName: \"kubernetes.io/projected/85f69a46-bfb5-43f0-829b-e54d8ea13b95-kube-api-access-rxp7q\") on node \"crc\" DevicePath \"\"" Feb 16 17:55:55 crc kubenswrapper[4870]: I0216 17:55:55.023232 4870 scope.go:117] "RemoveContainer" containerID="387ee1a7d94af3f34840651adc64e97c70ccad7039ca033be98716090bf666a7" Feb 16 17:55:55 crc kubenswrapper[4870]: E0216 17:55:55.023893 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"387ee1a7d94af3f34840651adc64e97c70ccad7039ca033be98716090bf666a7\": container with ID starting with 387ee1a7d94af3f34840651adc64e97c70ccad7039ca033be98716090bf666a7 not found: ID does not exist" 
containerID="387ee1a7d94af3f34840651adc64e97c70ccad7039ca033be98716090bf666a7" Feb 16 17:55:55 crc kubenswrapper[4870]: I0216 17:55:55.023941 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"387ee1a7d94af3f34840651adc64e97c70ccad7039ca033be98716090bf666a7"} err="failed to get container status \"387ee1a7d94af3f34840651adc64e97c70ccad7039ca033be98716090bf666a7\": rpc error: code = NotFound desc = could not find container \"387ee1a7d94af3f34840651adc64e97c70ccad7039ca033be98716090bf666a7\": container with ID starting with 387ee1a7d94af3f34840651adc64e97c70ccad7039ca033be98716090bf666a7 not found: ID does not exist" Feb 16 17:55:55 crc kubenswrapper[4870]: I0216 17:55:55.024073 4870 scope.go:117] "RemoveContainer" containerID="da74a5ef8533a7e17896bdca5375a98f1fae09c2160cd2544b7229c201801447" Feb 16 17:55:55 crc kubenswrapper[4870]: E0216 17:55:55.024440 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da74a5ef8533a7e17896bdca5375a98f1fae09c2160cd2544b7229c201801447\": container with ID starting with da74a5ef8533a7e17896bdca5375a98f1fae09c2160cd2544b7229c201801447 not found: ID does not exist" containerID="da74a5ef8533a7e17896bdca5375a98f1fae09c2160cd2544b7229c201801447" Feb 16 17:55:55 crc kubenswrapper[4870]: I0216 17:55:55.024460 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da74a5ef8533a7e17896bdca5375a98f1fae09c2160cd2544b7229c201801447"} err="failed to get container status \"da74a5ef8533a7e17896bdca5375a98f1fae09c2160cd2544b7229c201801447\": rpc error: code = NotFound desc = could not find container \"da74a5ef8533a7e17896bdca5375a98f1fae09c2160cd2544b7229c201801447\": container with ID starting with da74a5ef8533a7e17896bdca5375a98f1fae09c2160cd2544b7229c201801447 not found: ID does not exist" Feb 16 17:55:55 crc kubenswrapper[4870]: I0216 17:55:55.037747 4870 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85f69a46-bfb5-43f0-829b-e54d8ea13b95-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "85f69a46-bfb5-43f0-829b-e54d8ea13b95" (UID: "85f69a46-bfb5-43f0-829b-e54d8ea13b95"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:55:55 crc kubenswrapper[4870]: I0216 17:55:55.077510 4870 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/85f69a46-bfb5-43f0-829b-e54d8ea13b95-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 17:55:56 crc kubenswrapper[4870]: I0216 17:55:56.234459 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85f69a46-bfb5-43f0-829b-e54d8ea13b95" path="/var/lib/kubelet/pods/85f69a46-bfb5-43f0-829b-e54d8ea13b95/volumes" Feb 16 17:56:02 crc kubenswrapper[4870]: E0216 17:56:02.225204 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:56:05 crc kubenswrapper[4870]: I0216 17:56:05.366595 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:56:05 crc kubenswrapper[4870]: I0216 17:56:05.366971 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:56:13 crc kubenswrapper[4870]: E0216 17:56:13.225332 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:56:15 crc kubenswrapper[4870]: I0216 17:56:15.807829 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f74xs"] Feb 16 17:56:15 crc kubenswrapper[4870]: E0216 17:56:15.808443 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f69a46-bfb5-43f0-829b-e54d8ea13b95" containerName="gather" Feb 16 17:56:15 crc kubenswrapper[4870]: I0216 17:56:15.808457 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f69a46-bfb5-43f0-829b-e54d8ea13b95" containerName="gather" Feb 16 17:56:15 crc kubenswrapper[4870]: E0216 17:56:15.808482 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f69a46-bfb5-43f0-829b-e54d8ea13b95" containerName="copy" Feb 16 17:56:15 crc kubenswrapper[4870]: I0216 17:56:15.808488 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f69a46-bfb5-43f0-829b-e54d8ea13b95" containerName="copy" Feb 16 17:56:15 crc kubenswrapper[4870]: E0216 17:56:15.808505 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd0b277-02c6-47c0-8586-b6f05c6b4576" containerName="container-00" Feb 16 17:56:15 crc kubenswrapper[4870]: I0216 17:56:15.808513 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd0b277-02c6-47c0-8586-b6f05c6b4576" containerName="container-00" Feb 16 17:56:15 crc kubenswrapper[4870]: I0216 17:56:15.808687 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f69a46-bfb5-43f0-829b-e54d8ea13b95" containerName="gather" 
Feb 16 17:56:15 crc kubenswrapper[4870]: I0216 17:56:15.808707 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd0b277-02c6-47c0-8586-b6f05c6b4576" containerName="container-00" Feb 16 17:56:15 crc kubenswrapper[4870]: I0216 17:56:15.808719 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f69a46-bfb5-43f0-829b-e54d8ea13b95" containerName="copy" Feb 16 17:56:15 crc kubenswrapper[4870]: I0216 17:56:15.810883 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f74xs" Feb 16 17:56:15 crc kubenswrapper[4870]: I0216 17:56:15.839402 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f74xs"] Feb 16 17:56:15 crc kubenswrapper[4870]: I0216 17:56:15.909882 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dzww\" (UniqueName: \"kubernetes.io/projected/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-kube-api-access-7dzww\") pod \"redhat-operators-f74xs\" (UID: \"cde30d76-1a57-4db4-ab1e-08e0d98b5abe\") " pod="openshift-marketplace/redhat-operators-f74xs" Feb 16 17:56:15 crc kubenswrapper[4870]: I0216 17:56:15.910023 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-utilities\") pod \"redhat-operators-f74xs\" (UID: \"cde30d76-1a57-4db4-ab1e-08e0d98b5abe\") " pod="openshift-marketplace/redhat-operators-f74xs" Feb 16 17:56:15 crc kubenswrapper[4870]: I0216 17:56:15.910118 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-catalog-content\") pod \"redhat-operators-f74xs\" (UID: \"cde30d76-1a57-4db4-ab1e-08e0d98b5abe\") " pod="openshift-marketplace/redhat-operators-f74xs" Feb 16 17:56:15 
crc kubenswrapper[4870]: I0216 17:56:15.994054 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p4zpb"] Feb 16 17:56:15 crc kubenswrapper[4870]: I0216 17:56:15.996905 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4zpb" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.011621 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dzww\" (UniqueName: \"kubernetes.io/projected/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-kube-api-access-7dzww\") pod \"redhat-operators-f74xs\" (UID: \"cde30d76-1a57-4db4-ab1e-08e0d98b5abe\") " pod="openshift-marketplace/redhat-operators-f74xs" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.011678 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-utilities\") pod \"redhat-operators-f74xs\" (UID: \"cde30d76-1a57-4db4-ab1e-08e0d98b5abe\") " pod="openshift-marketplace/redhat-operators-f74xs" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.011737 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-catalog-content\") pod \"redhat-operators-f74xs\" (UID: \"cde30d76-1a57-4db4-ab1e-08e0d98b5abe\") " pod="openshift-marketplace/redhat-operators-f74xs" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.012373 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-catalog-content\") pod \"redhat-operators-f74xs\" (UID: \"cde30d76-1a57-4db4-ab1e-08e0d98b5abe\") " pod="openshift-marketplace/redhat-operators-f74xs" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.012825 4870 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-utilities\") pod \"redhat-operators-f74xs\" (UID: \"cde30d76-1a57-4db4-ab1e-08e0d98b5abe\") " pod="openshift-marketplace/redhat-operators-f74xs" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.035290 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dzww\" (UniqueName: \"kubernetes.io/projected/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-kube-api-access-7dzww\") pod \"redhat-operators-f74xs\" (UID: \"cde30d76-1a57-4db4-ab1e-08e0d98b5abe\") " pod="openshift-marketplace/redhat-operators-f74xs" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.054036 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4zpb"] Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.114007 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26978898-68c4-4a89-a171-78245fdc9ffd-catalog-content\") pod \"redhat-marketplace-p4zpb\" (UID: \"26978898-68c4-4a89-a171-78245fdc9ffd\") " pod="openshift-marketplace/redhat-marketplace-p4zpb" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.114071 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97cwr\" (UniqueName: \"kubernetes.io/projected/26978898-68c4-4a89-a171-78245fdc9ffd-kube-api-access-97cwr\") pod \"redhat-marketplace-p4zpb\" (UID: \"26978898-68c4-4a89-a171-78245fdc9ffd\") " pod="openshift-marketplace/redhat-marketplace-p4zpb" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.114340 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26978898-68c4-4a89-a171-78245fdc9ffd-utilities\") pod \"redhat-marketplace-p4zpb\" (UID: 
\"26978898-68c4-4a89-a171-78245fdc9ffd\") " pod="openshift-marketplace/redhat-marketplace-p4zpb" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.141674 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f74xs" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.216466 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26978898-68c4-4a89-a171-78245fdc9ffd-utilities\") pod \"redhat-marketplace-p4zpb\" (UID: \"26978898-68c4-4a89-a171-78245fdc9ffd\") " pod="openshift-marketplace/redhat-marketplace-p4zpb" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.216689 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26978898-68c4-4a89-a171-78245fdc9ffd-catalog-content\") pod \"redhat-marketplace-p4zpb\" (UID: \"26978898-68c4-4a89-a171-78245fdc9ffd\") " pod="openshift-marketplace/redhat-marketplace-p4zpb" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.216738 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97cwr\" (UniqueName: \"kubernetes.io/projected/26978898-68c4-4a89-a171-78245fdc9ffd-kube-api-access-97cwr\") pod \"redhat-marketplace-p4zpb\" (UID: \"26978898-68c4-4a89-a171-78245fdc9ffd\") " pod="openshift-marketplace/redhat-marketplace-p4zpb" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.217572 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26978898-68c4-4a89-a171-78245fdc9ffd-utilities\") pod \"redhat-marketplace-p4zpb\" (UID: \"26978898-68c4-4a89-a171-78245fdc9ffd\") " pod="openshift-marketplace/redhat-marketplace-p4zpb" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.217839 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26978898-68c4-4a89-a171-78245fdc9ffd-catalog-content\") pod \"redhat-marketplace-p4zpb\" (UID: \"26978898-68c4-4a89-a171-78245fdc9ffd\") " pod="openshift-marketplace/redhat-marketplace-p4zpb" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.242824 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97cwr\" (UniqueName: \"kubernetes.io/projected/26978898-68c4-4a89-a171-78245fdc9ffd-kube-api-access-97cwr\") pod \"redhat-marketplace-p4zpb\" (UID: \"26978898-68c4-4a89-a171-78245fdc9ffd\") " pod="openshift-marketplace/redhat-marketplace-p4zpb" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.317510 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4zpb" Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.691188 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f74xs"] Feb 16 17:56:16 crc kubenswrapper[4870]: I0216 17:56:16.866206 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4zpb"] Feb 16 17:56:17 crc kubenswrapper[4870]: I0216 17:56:17.130615 4870 generic.go:334] "Generic (PLEG): container finished" podID="26978898-68c4-4a89-a171-78245fdc9ffd" containerID="e3c2f64be0df164aeaccc6c1f1b831598a981fcb446d693aac7c1c2279e5ac61" exitCode=0 Feb 16 17:56:17 crc kubenswrapper[4870]: I0216 17:56:17.130691 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4zpb" event={"ID":"26978898-68c4-4a89-a171-78245fdc9ffd","Type":"ContainerDied","Data":"e3c2f64be0df164aeaccc6c1f1b831598a981fcb446d693aac7c1c2279e5ac61"} Feb 16 17:56:17 crc kubenswrapper[4870]: I0216 17:56:17.130718 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4zpb" 
event={"ID":"26978898-68c4-4a89-a171-78245fdc9ffd","Type":"ContainerStarted","Data":"c24bd72c3fcc7f41b337980825b8e3741a4c5071e5905feea70ed39dff7e5232"} Feb 16 17:56:17 crc kubenswrapper[4870]: I0216 17:56:17.132125 4870 generic.go:334] "Generic (PLEG): container finished" podID="cde30d76-1a57-4db4-ab1e-08e0d98b5abe" containerID="0aa6a31d0c25740c4f72465130a7a566558abc564f1fa8f57478e1d74fedcd5f" exitCode=0 Feb 16 17:56:17 crc kubenswrapper[4870]: I0216 17:56:17.132164 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f74xs" event={"ID":"cde30d76-1a57-4db4-ab1e-08e0d98b5abe","Type":"ContainerDied","Data":"0aa6a31d0c25740c4f72465130a7a566558abc564f1fa8f57478e1d74fedcd5f"} Feb 16 17:56:17 crc kubenswrapper[4870]: I0216 17:56:17.132191 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f74xs" event={"ID":"cde30d76-1a57-4db4-ab1e-08e0d98b5abe","Type":"ContainerStarted","Data":"027f7b77f2a923400e552ce4df4e52eade26ad1d95e3b987153b2f0ee19a3c55"} Feb 16 17:56:18 crc kubenswrapper[4870]: I0216 17:56:18.144326 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f74xs" event={"ID":"cde30d76-1a57-4db4-ab1e-08e0d98b5abe","Type":"ContainerStarted","Data":"596673e5cd9a01f46548e7e1709c425250a6e74291ac1a37402b3f03ee070313"} Feb 16 17:56:18 crc kubenswrapper[4870]: I0216 17:56:18.146271 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4zpb" event={"ID":"26978898-68c4-4a89-a171-78245fdc9ffd","Type":"ContainerStarted","Data":"fc0bfb8f4143f45ffb39d84f0024abc99ee726a0962d965d93d135b5afa4e157"} Feb 16 17:56:18 crc kubenswrapper[4870]: I0216 17:56:18.195864 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xg29l"] Feb 16 17:56:18 crc kubenswrapper[4870]: I0216 17:56:18.197764 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xg29l" Feb 16 17:56:18 crc kubenswrapper[4870]: I0216 17:56:18.216753 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xg29l"] Feb 16 17:56:18 crc kubenswrapper[4870]: I0216 17:56:18.263868 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-catalog-content\") pod \"community-operators-xg29l\" (UID: \"88b85e09-9bfa-42a0-84ec-06f45a67a5f5\") " pod="openshift-marketplace/community-operators-xg29l" Feb 16 17:56:18 crc kubenswrapper[4870]: I0216 17:56:18.264176 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdms5\" (UniqueName: \"kubernetes.io/projected/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-kube-api-access-fdms5\") pod \"community-operators-xg29l\" (UID: \"88b85e09-9bfa-42a0-84ec-06f45a67a5f5\") " pod="openshift-marketplace/community-operators-xg29l" Feb 16 17:56:18 crc kubenswrapper[4870]: I0216 17:56:18.264204 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-utilities\") pod \"community-operators-xg29l\" (UID: \"88b85e09-9bfa-42a0-84ec-06f45a67a5f5\") " pod="openshift-marketplace/community-operators-xg29l" Feb 16 17:56:18 crc kubenswrapper[4870]: I0216 17:56:18.365940 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdms5\" (UniqueName: \"kubernetes.io/projected/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-kube-api-access-fdms5\") pod \"community-operators-xg29l\" (UID: \"88b85e09-9bfa-42a0-84ec-06f45a67a5f5\") " pod="openshift-marketplace/community-operators-xg29l" Feb 16 17:56:18 crc kubenswrapper[4870]: I0216 17:56:18.366032 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-utilities\") pod \"community-operators-xg29l\" (UID: \"88b85e09-9bfa-42a0-84ec-06f45a67a5f5\") " pod="openshift-marketplace/community-operators-xg29l" Feb 16 17:56:18 crc kubenswrapper[4870]: I0216 17:56:18.366099 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-catalog-content\") pod \"community-operators-xg29l\" (UID: \"88b85e09-9bfa-42a0-84ec-06f45a67a5f5\") " pod="openshift-marketplace/community-operators-xg29l" Feb 16 17:56:18 crc kubenswrapper[4870]: I0216 17:56:18.366886 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-catalog-content\") pod \"community-operators-xg29l\" (UID: \"88b85e09-9bfa-42a0-84ec-06f45a67a5f5\") " pod="openshift-marketplace/community-operators-xg29l" Feb 16 17:56:18 crc kubenswrapper[4870]: I0216 17:56:18.367121 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-utilities\") pod \"community-operators-xg29l\" (UID: \"88b85e09-9bfa-42a0-84ec-06f45a67a5f5\") " pod="openshift-marketplace/community-operators-xg29l" Feb 16 17:56:18 crc kubenswrapper[4870]: I0216 17:56:18.391116 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdms5\" (UniqueName: \"kubernetes.io/projected/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-kube-api-access-fdms5\") pod \"community-operators-xg29l\" (UID: \"88b85e09-9bfa-42a0-84ec-06f45a67a5f5\") " pod="openshift-marketplace/community-operators-xg29l" Feb 16 17:56:18 crc kubenswrapper[4870]: I0216 17:56:18.516774 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xg29l" Feb 16 17:56:19 crc kubenswrapper[4870]: I0216 17:56:19.040834 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xg29l"] Feb 16 17:56:19 crc kubenswrapper[4870]: W0216 17:56:19.041049 4870 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88b85e09_9bfa_42a0_84ec_06f45a67a5f5.slice/crio-d61b43057f2fcf0b5fc3608ffaec623924ec5baa1af4b74987b68e0d7dfd3302 WatchSource:0}: Error finding container d61b43057f2fcf0b5fc3608ffaec623924ec5baa1af4b74987b68e0d7dfd3302: Status 404 returned error can't find the container with id d61b43057f2fcf0b5fc3608ffaec623924ec5baa1af4b74987b68e0d7dfd3302 Feb 16 17:56:19 crc kubenswrapper[4870]: I0216 17:56:19.155446 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xg29l" event={"ID":"88b85e09-9bfa-42a0-84ec-06f45a67a5f5","Type":"ContainerStarted","Data":"d61b43057f2fcf0b5fc3608ffaec623924ec5baa1af4b74987b68e0d7dfd3302"} Feb 16 17:56:19 crc kubenswrapper[4870]: I0216 17:56:19.157580 4870 generic.go:334] "Generic (PLEG): container finished" podID="26978898-68c4-4a89-a171-78245fdc9ffd" containerID="fc0bfb8f4143f45ffb39d84f0024abc99ee726a0962d965d93d135b5afa4e157" exitCode=0 Feb 16 17:56:19 crc kubenswrapper[4870]: I0216 17:56:19.157687 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4zpb" event={"ID":"26978898-68c4-4a89-a171-78245fdc9ffd","Type":"ContainerDied","Data":"fc0bfb8f4143f45ffb39d84f0024abc99ee726a0962d965d93d135b5afa4e157"} Feb 16 17:56:20 crc kubenswrapper[4870]: I0216 17:56:20.167938 4870 generic.go:334] "Generic (PLEG): container finished" podID="88b85e09-9bfa-42a0-84ec-06f45a67a5f5" containerID="0cba3f35a9ac92b95ea9778542da667f6f0a7bbfa8d7f5c457bc0d12492ae91f" exitCode=0 Feb 16 17:56:20 crc kubenswrapper[4870]: I0216 
17:56:20.168138 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xg29l" event={"ID":"88b85e09-9bfa-42a0-84ec-06f45a67a5f5","Type":"ContainerDied","Data":"0cba3f35a9ac92b95ea9778542da667f6f0a7bbfa8d7f5c457bc0d12492ae91f"} Feb 16 17:56:20 crc kubenswrapper[4870]: I0216 17:56:20.175216 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4zpb" event={"ID":"26978898-68c4-4a89-a171-78245fdc9ffd","Type":"ContainerStarted","Data":"32ac38a0f0bd28c09e432c637d09b5dfac7977ec4428618467db0d36f7792928"} Feb 16 17:56:20 crc kubenswrapper[4870]: I0216 17:56:20.212937 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p4zpb" podStartSLOduration=2.780406213 podStartE2EDuration="5.212918186s" podCreationTimestamp="2026-02-16 17:56:15 +0000 UTC" firstStartedPulling="2026-02-16 17:56:17.133839829 +0000 UTC m=+3381.617304213" lastFinishedPulling="2026-02-16 17:56:19.566351792 +0000 UTC m=+3384.049816186" observedRunningTime="2026-02-16 17:56:20.209683134 +0000 UTC m=+3384.693147518" watchObservedRunningTime="2026-02-16 17:56:20.212918186 +0000 UTC m=+3384.696382570" Feb 16 17:56:22 crc kubenswrapper[4870]: I0216 17:56:22.196529 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xg29l" event={"ID":"88b85e09-9bfa-42a0-84ec-06f45a67a5f5","Type":"ContainerStarted","Data":"9633f56a7509845bcbdf5a9670172257bd54233625c11ccc59910219d1f6c9dd"} Feb 16 17:56:23 crc kubenswrapper[4870]: I0216 17:56:23.205606 4870 generic.go:334] "Generic (PLEG): container finished" podID="cde30d76-1a57-4db4-ab1e-08e0d98b5abe" containerID="596673e5cd9a01f46548e7e1709c425250a6e74291ac1a37402b3f03ee070313" exitCode=0 Feb 16 17:56:23 crc kubenswrapper[4870]: I0216 17:56:23.205651 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f74xs" 
event={"ID":"cde30d76-1a57-4db4-ab1e-08e0d98b5abe","Type":"ContainerDied","Data":"596673e5cd9a01f46548e7e1709c425250a6e74291ac1a37402b3f03ee070313"} Feb 16 17:56:24 crc kubenswrapper[4870]: E0216 17:56:24.257744 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:56:25 crc kubenswrapper[4870]: I0216 17:56:25.223959 4870 generic.go:334] "Generic (PLEG): container finished" podID="88b85e09-9bfa-42a0-84ec-06f45a67a5f5" containerID="9633f56a7509845bcbdf5a9670172257bd54233625c11ccc59910219d1f6c9dd" exitCode=0 Feb 16 17:56:25 crc kubenswrapper[4870]: I0216 17:56:25.223975 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xg29l" event={"ID":"88b85e09-9bfa-42a0-84ec-06f45a67a5f5","Type":"ContainerDied","Data":"9633f56a7509845bcbdf5a9670172257bd54233625c11ccc59910219d1f6c9dd"} Feb 16 17:56:25 crc kubenswrapper[4870]: I0216 17:56:25.229022 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f74xs" event={"ID":"cde30d76-1a57-4db4-ab1e-08e0d98b5abe","Type":"ContainerStarted","Data":"db42a0c86f60495b39d8290092191fdcff4184ed150e038355572dda481dcb97"} Feb 16 17:56:25 crc kubenswrapper[4870]: I0216 17:56:25.262492 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f74xs" podStartSLOduration=2.946646066 podStartE2EDuration="10.262474757s" podCreationTimestamp="2026-02-16 17:56:15 +0000 UTC" firstStartedPulling="2026-02-16 17:56:17.133838639 +0000 UTC m=+3381.617303023" lastFinishedPulling="2026-02-16 17:56:24.44966732 +0000 UTC m=+3388.933131714" observedRunningTime="2026-02-16 17:56:25.258784932 +0000 UTC m=+3389.742249316" 
watchObservedRunningTime="2026-02-16 17:56:25.262474757 +0000 UTC m=+3389.745939131" Feb 16 17:56:26 crc kubenswrapper[4870]: I0216 17:56:26.142961 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f74xs" Feb 16 17:56:26 crc kubenswrapper[4870]: I0216 17:56:26.143264 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f74xs" Feb 16 17:56:26 crc kubenswrapper[4870]: I0216 17:56:26.242515 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xg29l" event={"ID":"88b85e09-9bfa-42a0-84ec-06f45a67a5f5","Type":"ContainerStarted","Data":"94a12a194a2a157629bda68c5b18b0c1f5342a47c376ad33bbf5fc25bb86b795"} Feb 16 17:56:26 crc kubenswrapper[4870]: I0216 17:56:26.265937 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xg29l" podStartSLOduration=2.8319013650000002 podStartE2EDuration="8.265917761s" podCreationTimestamp="2026-02-16 17:56:18 +0000 UTC" firstStartedPulling="2026-02-16 17:56:20.171371635 +0000 UTC m=+3384.654836029" lastFinishedPulling="2026-02-16 17:56:25.605388041 +0000 UTC m=+3390.088852425" observedRunningTime="2026-02-16 17:56:26.262066012 +0000 UTC m=+3390.745530396" watchObservedRunningTime="2026-02-16 17:56:26.265917761 +0000 UTC m=+3390.749382145" Feb 16 17:56:26 crc kubenswrapper[4870]: I0216 17:56:26.318838 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p4zpb" Feb 16 17:56:26 crc kubenswrapper[4870]: I0216 17:56:26.319173 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p4zpb" Feb 16 17:56:26 crc kubenswrapper[4870]: I0216 17:56:26.365375 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p4zpb" Feb 16 
17:56:27 crc kubenswrapper[4870]: I0216 17:56:27.225099 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f74xs" podUID="cde30d76-1a57-4db4-ab1e-08e0d98b5abe" containerName="registry-server" probeResult="failure" output=< Feb 16 17:56:27 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s Feb 16 17:56:27 crc kubenswrapper[4870]: > Feb 16 17:56:27 crc kubenswrapper[4870]: I0216 17:56:27.298788 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p4zpb" Feb 16 17:56:28 crc kubenswrapper[4870]: I0216 17:56:28.517578 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xg29l" Feb 16 17:56:28 crc kubenswrapper[4870]: I0216 17:56:28.517625 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xg29l" Feb 16 17:56:28 crc kubenswrapper[4870]: I0216 17:56:28.573736 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xg29l" Feb 16 17:56:29 crc kubenswrapper[4870]: I0216 17:56:29.592505 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4zpb"] Feb 16 17:56:29 crc kubenswrapper[4870]: I0216 17:56:29.593140 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p4zpb" podUID="26978898-68c4-4a89-a171-78245fdc9ffd" containerName="registry-server" containerID="cri-o://32ac38a0f0bd28c09e432c637d09b5dfac7977ec4428618467db0d36f7792928" gracePeriod=2 Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.094887 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4zpb" Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.226671 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26978898-68c4-4a89-a171-78245fdc9ffd-utilities\") pod \"26978898-68c4-4a89-a171-78245fdc9ffd\" (UID: \"26978898-68c4-4a89-a171-78245fdc9ffd\") " Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.227039 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97cwr\" (UniqueName: \"kubernetes.io/projected/26978898-68c4-4a89-a171-78245fdc9ffd-kube-api-access-97cwr\") pod \"26978898-68c4-4a89-a171-78245fdc9ffd\" (UID: \"26978898-68c4-4a89-a171-78245fdc9ffd\") " Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.227096 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26978898-68c4-4a89-a171-78245fdc9ffd-catalog-content\") pod \"26978898-68c4-4a89-a171-78245fdc9ffd\" (UID: \"26978898-68c4-4a89-a171-78245fdc9ffd\") " Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.227397 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26978898-68c4-4a89-a171-78245fdc9ffd-utilities" (OuterVolumeSpecName: "utilities") pod "26978898-68c4-4a89-a171-78245fdc9ffd" (UID: "26978898-68c4-4a89-a171-78245fdc9ffd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.227862 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26978898-68c4-4a89-a171-78245fdc9ffd-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.237221 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26978898-68c4-4a89-a171-78245fdc9ffd-kube-api-access-97cwr" (OuterVolumeSpecName: "kube-api-access-97cwr") pod "26978898-68c4-4a89-a171-78245fdc9ffd" (UID: "26978898-68c4-4a89-a171-78245fdc9ffd"). InnerVolumeSpecName "kube-api-access-97cwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.250615 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26978898-68c4-4a89-a171-78245fdc9ffd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26978898-68c4-4a89-a171-78245fdc9ffd" (UID: "26978898-68c4-4a89-a171-78245fdc9ffd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.281502 4870 generic.go:334] "Generic (PLEG): container finished" podID="26978898-68c4-4a89-a171-78245fdc9ffd" containerID="32ac38a0f0bd28c09e432c637d09b5dfac7977ec4428618467db0d36f7792928" exitCode=0 Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.281632 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4zpb" Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.290178 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4zpb" event={"ID":"26978898-68c4-4a89-a171-78245fdc9ffd","Type":"ContainerDied","Data":"32ac38a0f0bd28c09e432c637d09b5dfac7977ec4428618467db0d36f7792928"} Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.290233 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4zpb" event={"ID":"26978898-68c4-4a89-a171-78245fdc9ffd","Type":"ContainerDied","Data":"c24bd72c3fcc7f41b337980825b8e3741a4c5071e5905feea70ed39dff7e5232"} Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.290259 4870 scope.go:117] "RemoveContainer" containerID="32ac38a0f0bd28c09e432c637d09b5dfac7977ec4428618467db0d36f7792928" Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.316391 4870 scope.go:117] "RemoveContainer" containerID="fc0bfb8f4143f45ffb39d84f0024abc99ee726a0962d965d93d135b5afa4e157" Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.327071 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4zpb"] Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.333301 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97cwr\" (UniqueName: \"kubernetes.io/projected/26978898-68c4-4a89-a171-78245fdc9ffd-kube-api-access-97cwr\") on node \"crc\" DevicePath \"\"" Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.333349 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26978898-68c4-4a89-a171-78245fdc9ffd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.340493 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4zpb"] Feb 16 17:56:30 crc 
kubenswrapper[4870]: I0216 17:56:30.344632 4870 scope.go:117] "RemoveContainer" containerID="e3c2f64be0df164aeaccc6c1f1b831598a981fcb446d693aac7c1c2279e5ac61" Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.397357 4870 scope.go:117] "RemoveContainer" containerID="32ac38a0f0bd28c09e432c637d09b5dfac7977ec4428618467db0d36f7792928" Feb 16 17:56:30 crc kubenswrapper[4870]: E0216 17:56:30.397975 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32ac38a0f0bd28c09e432c637d09b5dfac7977ec4428618467db0d36f7792928\": container with ID starting with 32ac38a0f0bd28c09e432c637d09b5dfac7977ec4428618467db0d36f7792928 not found: ID does not exist" containerID="32ac38a0f0bd28c09e432c637d09b5dfac7977ec4428618467db0d36f7792928" Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.398010 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32ac38a0f0bd28c09e432c637d09b5dfac7977ec4428618467db0d36f7792928"} err="failed to get container status \"32ac38a0f0bd28c09e432c637d09b5dfac7977ec4428618467db0d36f7792928\": rpc error: code = NotFound desc = could not find container \"32ac38a0f0bd28c09e432c637d09b5dfac7977ec4428618467db0d36f7792928\": container with ID starting with 32ac38a0f0bd28c09e432c637d09b5dfac7977ec4428618467db0d36f7792928 not found: ID does not exist" Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.398038 4870 scope.go:117] "RemoveContainer" containerID="fc0bfb8f4143f45ffb39d84f0024abc99ee726a0962d965d93d135b5afa4e157" Feb 16 17:56:30 crc kubenswrapper[4870]: E0216 17:56:30.398304 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc0bfb8f4143f45ffb39d84f0024abc99ee726a0962d965d93d135b5afa4e157\": container with ID starting with fc0bfb8f4143f45ffb39d84f0024abc99ee726a0962d965d93d135b5afa4e157 not found: ID does not exist" 
containerID="fc0bfb8f4143f45ffb39d84f0024abc99ee726a0962d965d93d135b5afa4e157" Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.398327 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc0bfb8f4143f45ffb39d84f0024abc99ee726a0962d965d93d135b5afa4e157"} err="failed to get container status \"fc0bfb8f4143f45ffb39d84f0024abc99ee726a0962d965d93d135b5afa4e157\": rpc error: code = NotFound desc = could not find container \"fc0bfb8f4143f45ffb39d84f0024abc99ee726a0962d965d93d135b5afa4e157\": container with ID starting with fc0bfb8f4143f45ffb39d84f0024abc99ee726a0962d965d93d135b5afa4e157 not found: ID does not exist" Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.398341 4870 scope.go:117] "RemoveContainer" containerID="e3c2f64be0df164aeaccc6c1f1b831598a981fcb446d693aac7c1c2279e5ac61" Feb 16 17:56:30 crc kubenswrapper[4870]: E0216 17:56:30.398691 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3c2f64be0df164aeaccc6c1f1b831598a981fcb446d693aac7c1c2279e5ac61\": container with ID starting with e3c2f64be0df164aeaccc6c1f1b831598a981fcb446d693aac7c1c2279e5ac61 not found: ID does not exist" containerID="e3c2f64be0df164aeaccc6c1f1b831598a981fcb446d693aac7c1c2279e5ac61" Feb 16 17:56:30 crc kubenswrapper[4870]: I0216 17:56:30.398718 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c2f64be0df164aeaccc6c1f1b831598a981fcb446d693aac7c1c2279e5ac61"} err="failed to get container status \"e3c2f64be0df164aeaccc6c1f1b831598a981fcb446d693aac7c1c2279e5ac61\": rpc error: code = NotFound desc = could not find container \"e3c2f64be0df164aeaccc6c1f1b831598a981fcb446d693aac7c1c2279e5ac61\": container with ID starting with e3c2f64be0df164aeaccc6c1f1b831598a981fcb446d693aac7c1c2279e5ac61 not found: ID does not exist" Feb 16 17:56:32 crc kubenswrapper[4870]: I0216 17:56:32.234083 4870 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26978898-68c4-4a89-a171-78245fdc9ffd" path="/var/lib/kubelet/pods/26978898-68c4-4a89-a171-78245fdc9ffd/volumes" Feb 16 17:56:35 crc kubenswrapper[4870]: I0216 17:56:35.367232 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:56:35 crc kubenswrapper[4870]: I0216 17:56:35.367551 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:56:35 crc kubenswrapper[4870]: I0216 17:56:35.367612 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" Feb 16 17:56:35 crc kubenswrapper[4870]: I0216 17:56:35.368534 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fad609847562591995d01b404761ef2f912723b7c6df5a3a9c2782bc74728b1f"} pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:56:35 crc kubenswrapper[4870]: I0216 17:56:35.368582 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" containerID="cri-o://fad609847562591995d01b404761ef2f912723b7c6df5a3a9c2782bc74728b1f" gracePeriod=600 Feb 16 17:56:36 crc 
kubenswrapper[4870]: I0216 17:56:36.335232 4870 generic.go:334] "Generic (PLEG): container finished" podID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerID="fad609847562591995d01b404761ef2f912723b7c6df5a3a9c2782bc74728b1f" exitCode=0 Feb 16 17:56:36 crc kubenswrapper[4870]: I0216 17:56:36.335313 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerDied","Data":"fad609847562591995d01b404761ef2f912723b7c6df5a3a9c2782bc74728b1f"} Feb 16 17:56:36 crc kubenswrapper[4870]: I0216 17:56:36.335762 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerStarted","Data":"31bbd1163ce6eca4377493bd0164a78761deddd73e2a60c81e561bd8ea435ba6"} Feb 16 17:56:36 crc kubenswrapper[4870]: I0216 17:56:36.335785 4870 scope.go:117] "RemoveContainer" containerID="7bb81019315dd861e62271d76de69cac13a6573dacab2bd37177a42ccbfc3053" Feb 16 17:56:37 crc kubenswrapper[4870]: I0216 17:56:37.196483 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f74xs" podUID="cde30d76-1a57-4db4-ab1e-08e0d98b5abe" containerName="registry-server" probeResult="failure" output=< Feb 16 17:56:37 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s Feb 16 17:56:37 crc kubenswrapper[4870]: > Feb 16 17:56:38 crc kubenswrapper[4870]: I0216 17:56:38.230746 4870 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:56:38 crc kubenswrapper[4870]: E0216 17:56:38.325023 4870 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in 
quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:56:38 crc kubenswrapper[4870]: E0216 17:56:38.325074 4870 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 17:56:38 crc kubenswrapper[4870]: E0216 17:56:38.325210 4870 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7b2p7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6hkdm_openstack(34a86750-1fff-4add-8462-7ab805ec7f89): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 17:56:38 crc kubenswrapper[4870]: E0216 17:56:38.326759 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:56:38 crc kubenswrapper[4870]: I0216 17:56:38.565307 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xg29l" Feb 16 17:56:38 crc kubenswrapper[4870]: I0216 17:56:38.837075 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xg29l"] Feb 16 17:56:39 crc kubenswrapper[4870]: I0216 17:56:39.363241 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xg29l" podUID="88b85e09-9bfa-42a0-84ec-06f45a67a5f5" containerName="registry-server" containerID="cri-o://94a12a194a2a157629bda68c5b18b0c1f5342a47c376ad33bbf5fc25bb86b795" gracePeriod=2 Feb 16 17:56:39 crc kubenswrapper[4870]: I0216 17:56:39.932188 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xg29l" Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.026644 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-utilities\") pod \"88b85e09-9bfa-42a0-84ec-06f45a67a5f5\" (UID: \"88b85e09-9bfa-42a0-84ec-06f45a67a5f5\") " Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.026739 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-catalog-content\") pod \"88b85e09-9bfa-42a0-84ec-06f45a67a5f5\" (UID: \"88b85e09-9bfa-42a0-84ec-06f45a67a5f5\") " Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.026780 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdms5\" (UniqueName: 
\"kubernetes.io/projected/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-kube-api-access-fdms5\") pod \"88b85e09-9bfa-42a0-84ec-06f45a67a5f5\" (UID: \"88b85e09-9bfa-42a0-84ec-06f45a67a5f5\") " Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.027492 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-utilities" (OuterVolumeSpecName: "utilities") pod "88b85e09-9bfa-42a0-84ec-06f45a67a5f5" (UID: "88b85e09-9bfa-42a0-84ec-06f45a67a5f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.041148 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-kube-api-access-fdms5" (OuterVolumeSpecName: "kube-api-access-fdms5") pod "88b85e09-9bfa-42a0-84ec-06f45a67a5f5" (UID: "88b85e09-9bfa-42a0-84ec-06f45a67a5f5"). InnerVolumeSpecName "kube-api-access-fdms5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.095184 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88b85e09-9bfa-42a0-84ec-06f45a67a5f5" (UID: "88b85e09-9bfa-42a0-84ec-06f45a67a5f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.129406 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.129441 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.129453 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdms5\" (UniqueName: \"kubernetes.io/projected/88b85e09-9bfa-42a0-84ec-06f45a67a5f5-kube-api-access-fdms5\") on node \"crc\" DevicePath \"\"" Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.383898 4870 generic.go:334] "Generic (PLEG): container finished" podID="88b85e09-9bfa-42a0-84ec-06f45a67a5f5" containerID="94a12a194a2a157629bda68c5b18b0c1f5342a47c376ad33bbf5fc25bb86b795" exitCode=0 Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.383970 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xg29l" event={"ID":"88b85e09-9bfa-42a0-84ec-06f45a67a5f5","Type":"ContainerDied","Data":"94a12a194a2a157629bda68c5b18b0c1f5342a47c376ad33bbf5fc25bb86b795"} Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.384005 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xg29l" event={"ID":"88b85e09-9bfa-42a0-84ec-06f45a67a5f5","Type":"ContainerDied","Data":"d61b43057f2fcf0b5fc3608ffaec623924ec5baa1af4b74987b68e0d7dfd3302"} Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.384040 4870 scope.go:117] "RemoveContainer" containerID="94a12a194a2a157629bda68c5b18b0c1f5342a47c376ad33bbf5fc25bb86b795" Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 
17:56:40.384272 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xg29l" Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.417257 4870 scope.go:117] "RemoveContainer" containerID="9633f56a7509845bcbdf5a9670172257bd54233625c11ccc59910219d1f6c9dd" Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.438340 4870 scope.go:117] "RemoveContainer" containerID="0cba3f35a9ac92b95ea9778542da667f6f0a7bbfa8d7f5c457bc0d12492ae91f" Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.439995 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xg29l"] Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.459461 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xg29l"] Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.494932 4870 scope.go:117] "RemoveContainer" containerID="94a12a194a2a157629bda68c5b18b0c1f5342a47c376ad33bbf5fc25bb86b795" Feb 16 17:56:40 crc kubenswrapper[4870]: E0216 17:56:40.495423 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94a12a194a2a157629bda68c5b18b0c1f5342a47c376ad33bbf5fc25bb86b795\": container with ID starting with 94a12a194a2a157629bda68c5b18b0c1f5342a47c376ad33bbf5fc25bb86b795 not found: ID does not exist" containerID="94a12a194a2a157629bda68c5b18b0c1f5342a47c376ad33bbf5fc25bb86b795" Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.495455 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94a12a194a2a157629bda68c5b18b0c1f5342a47c376ad33bbf5fc25bb86b795"} err="failed to get container status \"94a12a194a2a157629bda68c5b18b0c1f5342a47c376ad33bbf5fc25bb86b795\": rpc error: code = NotFound desc = could not find container \"94a12a194a2a157629bda68c5b18b0c1f5342a47c376ad33bbf5fc25bb86b795\": container with ID starting with 
94a12a194a2a157629bda68c5b18b0c1f5342a47c376ad33bbf5fc25bb86b795 not found: ID does not exist" Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.495476 4870 scope.go:117] "RemoveContainer" containerID="9633f56a7509845bcbdf5a9670172257bd54233625c11ccc59910219d1f6c9dd" Feb 16 17:56:40 crc kubenswrapper[4870]: E0216 17:56:40.496108 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9633f56a7509845bcbdf5a9670172257bd54233625c11ccc59910219d1f6c9dd\": container with ID starting with 9633f56a7509845bcbdf5a9670172257bd54233625c11ccc59910219d1f6c9dd not found: ID does not exist" containerID="9633f56a7509845bcbdf5a9670172257bd54233625c11ccc59910219d1f6c9dd" Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.496130 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9633f56a7509845bcbdf5a9670172257bd54233625c11ccc59910219d1f6c9dd"} err="failed to get container status \"9633f56a7509845bcbdf5a9670172257bd54233625c11ccc59910219d1f6c9dd\": rpc error: code = NotFound desc = could not find container \"9633f56a7509845bcbdf5a9670172257bd54233625c11ccc59910219d1f6c9dd\": container with ID starting with 9633f56a7509845bcbdf5a9670172257bd54233625c11ccc59910219d1f6c9dd not found: ID does not exist" Feb 16 17:56:40 crc kubenswrapper[4870]: I0216 17:56:40.496142 4870 scope.go:117] "RemoveContainer" containerID="0cba3f35a9ac92b95ea9778542da667f6f0a7bbfa8d7f5c457bc0d12492ae91f" Feb 16 17:56:40 crc kubenswrapper[4870]: E0216 17:56:40.496362 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cba3f35a9ac92b95ea9778542da667f6f0a7bbfa8d7f5c457bc0d12492ae91f\": container with ID starting with 0cba3f35a9ac92b95ea9778542da667f6f0a7bbfa8d7f5c457bc0d12492ae91f not found: ID does not exist" containerID="0cba3f35a9ac92b95ea9778542da667f6f0a7bbfa8d7f5c457bc0d12492ae91f" Feb 16 17:56:40 crc 
kubenswrapper[4870]: I0216 17:56:40.496381 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cba3f35a9ac92b95ea9778542da667f6f0a7bbfa8d7f5c457bc0d12492ae91f"} err="failed to get container status \"0cba3f35a9ac92b95ea9778542da667f6f0a7bbfa8d7f5c457bc0d12492ae91f\": rpc error: code = NotFound desc = could not find container \"0cba3f35a9ac92b95ea9778542da667f6f0a7bbfa8d7f5c457bc0d12492ae91f\": container with ID starting with 0cba3f35a9ac92b95ea9778542da667f6f0a7bbfa8d7f5c457bc0d12492ae91f not found: ID does not exist" Feb 16 17:56:42 crc kubenswrapper[4870]: I0216 17:56:42.238536 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88b85e09-9bfa-42a0-84ec-06f45a67a5f5" path="/var/lib/kubelet/pods/88b85e09-9bfa-42a0-84ec-06f45a67a5f5/volumes" Feb 16 17:56:46 crc kubenswrapper[4870]: I0216 17:56:46.193007 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f74xs" Feb 16 17:56:46 crc kubenswrapper[4870]: I0216 17:56:46.252203 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f74xs" Feb 16 17:56:47 crc kubenswrapper[4870]: I0216 17:56:47.000328 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f74xs"] Feb 16 17:56:47 crc kubenswrapper[4870]: I0216 17:56:47.446296 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f74xs" podUID="cde30d76-1a57-4db4-ab1e-08e0d98b5abe" containerName="registry-server" containerID="cri-o://db42a0c86f60495b39d8290092191fdcff4184ed150e038355572dda481dcb97" gracePeriod=2 Feb 16 17:56:47 crc kubenswrapper[4870]: I0216 17:56:47.956745 4870 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f74xs" Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.096758 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dzww\" (UniqueName: \"kubernetes.io/projected/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-kube-api-access-7dzww\") pod \"cde30d76-1a57-4db4-ab1e-08e0d98b5abe\" (UID: \"cde30d76-1a57-4db4-ab1e-08e0d98b5abe\") " Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.096872 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-utilities\") pod \"cde30d76-1a57-4db4-ab1e-08e0d98b5abe\" (UID: \"cde30d76-1a57-4db4-ab1e-08e0d98b5abe\") " Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.096924 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-catalog-content\") pod \"cde30d76-1a57-4db4-ab1e-08e0d98b5abe\" (UID: \"cde30d76-1a57-4db4-ab1e-08e0d98b5abe\") " Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.097479 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-utilities" (OuterVolumeSpecName: "utilities") pod "cde30d76-1a57-4db4-ab1e-08e0d98b5abe" (UID: "cde30d76-1a57-4db4-ab1e-08e0d98b5abe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.104523 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-kube-api-access-7dzww" (OuterVolumeSpecName: "kube-api-access-7dzww") pod "cde30d76-1a57-4db4-ab1e-08e0d98b5abe" (UID: "cde30d76-1a57-4db4-ab1e-08e0d98b5abe"). InnerVolumeSpecName "kube-api-access-7dzww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.199463 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dzww\" (UniqueName: \"kubernetes.io/projected/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-kube-api-access-7dzww\") on node \"crc\" DevicePath \"\"" Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.199678 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.217861 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cde30d76-1a57-4db4-ab1e-08e0d98b5abe" (UID: "cde30d76-1a57-4db4-ab1e-08e0d98b5abe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.301149 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde30d76-1a57-4db4-ab1e-08e0d98b5abe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.462005 4870 generic.go:334] "Generic (PLEG): container finished" podID="cde30d76-1a57-4db4-ab1e-08e0d98b5abe" containerID="db42a0c86f60495b39d8290092191fdcff4184ed150e038355572dda481dcb97" exitCode=0 Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.462067 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f74xs" event={"ID":"cde30d76-1a57-4db4-ab1e-08e0d98b5abe","Type":"ContainerDied","Data":"db42a0c86f60495b39d8290092191fdcff4184ed150e038355572dda481dcb97"} Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.462119 4870 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f74xs" Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.462133 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f74xs" event={"ID":"cde30d76-1a57-4db4-ab1e-08e0d98b5abe","Type":"ContainerDied","Data":"027f7b77f2a923400e552ce4df4e52eade26ad1d95e3b987153b2f0ee19a3c55"} Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.462190 4870 scope.go:117] "RemoveContainer" containerID="db42a0c86f60495b39d8290092191fdcff4184ed150e038355572dda481dcb97" Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.495851 4870 scope.go:117] "RemoveContainer" containerID="596673e5cd9a01f46548e7e1709c425250a6e74291ac1a37402b3f03ee070313" Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.498520 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f74xs"] Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.509847 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f74xs"] Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.524548 4870 scope.go:117] "RemoveContainer" containerID="0aa6a31d0c25740c4f72465130a7a566558abc564f1fa8f57478e1d74fedcd5f" Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.573320 4870 scope.go:117] "RemoveContainer" containerID="db42a0c86f60495b39d8290092191fdcff4184ed150e038355572dda481dcb97" Feb 16 17:56:48 crc kubenswrapper[4870]: E0216 17:56:48.573930 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db42a0c86f60495b39d8290092191fdcff4184ed150e038355572dda481dcb97\": container with ID starting with db42a0c86f60495b39d8290092191fdcff4184ed150e038355572dda481dcb97 not found: ID does not exist" containerID="db42a0c86f60495b39d8290092191fdcff4184ed150e038355572dda481dcb97" Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.574005 4870 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db42a0c86f60495b39d8290092191fdcff4184ed150e038355572dda481dcb97"} err="failed to get container status \"db42a0c86f60495b39d8290092191fdcff4184ed150e038355572dda481dcb97\": rpc error: code = NotFound desc = could not find container \"db42a0c86f60495b39d8290092191fdcff4184ed150e038355572dda481dcb97\": container with ID starting with db42a0c86f60495b39d8290092191fdcff4184ed150e038355572dda481dcb97 not found: ID does not exist" Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.574030 4870 scope.go:117] "RemoveContainer" containerID="596673e5cd9a01f46548e7e1709c425250a6e74291ac1a37402b3f03ee070313" Feb 16 17:56:48 crc kubenswrapper[4870]: E0216 17:56:48.574364 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"596673e5cd9a01f46548e7e1709c425250a6e74291ac1a37402b3f03ee070313\": container with ID starting with 596673e5cd9a01f46548e7e1709c425250a6e74291ac1a37402b3f03ee070313 not found: ID does not exist" containerID="596673e5cd9a01f46548e7e1709c425250a6e74291ac1a37402b3f03ee070313" Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.574408 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"596673e5cd9a01f46548e7e1709c425250a6e74291ac1a37402b3f03ee070313"} err="failed to get container status \"596673e5cd9a01f46548e7e1709c425250a6e74291ac1a37402b3f03ee070313\": rpc error: code = NotFound desc = could not find container \"596673e5cd9a01f46548e7e1709c425250a6e74291ac1a37402b3f03ee070313\": container with ID starting with 596673e5cd9a01f46548e7e1709c425250a6e74291ac1a37402b3f03ee070313 not found: ID does not exist" Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.574422 4870 scope.go:117] "RemoveContainer" containerID="0aa6a31d0c25740c4f72465130a7a566558abc564f1fa8f57478e1d74fedcd5f" Feb 16 17:56:48 crc kubenswrapper[4870]: E0216 
17:56:48.574738 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aa6a31d0c25740c4f72465130a7a566558abc564f1fa8f57478e1d74fedcd5f\": container with ID starting with 0aa6a31d0c25740c4f72465130a7a566558abc564f1fa8f57478e1d74fedcd5f not found: ID does not exist" containerID="0aa6a31d0c25740c4f72465130a7a566558abc564f1fa8f57478e1d74fedcd5f" Feb 16 17:56:48 crc kubenswrapper[4870]: I0216 17:56:48.574776 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aa6a31d0c25740c4f72465130a7a566558abc564f1fa8f57478e1d74fedcd5f"} err="failed to get container status \"0aa6a31d0c25740c4f72465130a7a566558abc564f1fa8f57478e1d74fedcd5f\": rpc error: code = NotFound desc = could not find container \"0aa6a31d0c25740c4f72465130a7a566558abc564f1fa8f57478e1d74fedcd5f\": container with ID starting with 0aa6a31d0c25740c4f72465130a7a566558abc564f1fa8f57478e1d74fedcd5f not found: ID does not exist" Feb 16 17:56:50 crc kubenswrapper[4870]: E0216 17:56:50.225128 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:56:50 crc kubenswrapper[4870]: I0216 17:56:50.241143 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cde30d76-1a57-4db4-ab1e-08e0d98b5abe" path="/var/lib/kubelet/pods/cde30d76-1a57-4db4-ab1e-08e0d98b5abe/volumes" Feb 16 17:57:04 crc kubenswrapper[4870]: E0216 17:57:04.225273 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:57:05 crc kubenswrapper[4870]: I0216 17:57:05.017454 4870 scope.go:117] "RemoveContainer" containerID="086a5db785aae46ca55d534a05407b478706d1d8c1667f191c3ebe4c997c9d60" Feb 16 17:57:15 crc kubenswrapper[4870]: E0216 17:57:15.228628 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:57:26 crc kubenswrapper[4870]: E0216 17:57:26.233377 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:57:37 crc kubenswrapper[4870]: E0216 17:57:37.225449 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:57:51 crc kubenswrapper[4870]: E0216 17:57:51.225336 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:58:05 crc kubenswrapper[4870]: E0216 17:58:05.223962 4870 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:58:18 crc kubenswrapper[4870]: E0216 17:58:18.232628 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:58:30 crc kubenswrapper[4870]: E0216 17:58:30.224755 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.571270 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z6z9w"] Feb 16 17:58:30 crc kubenswrapper[4870]: E0216 17:58:30.571706 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26978898-68c4-4a89-a171-78245fdc9ffd" containerName="extract-content" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.571719 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="26978898-68c4-4a89-a171-78245fdc9ffd" containerName="extract-content" Feb 16 17:58:30 crc kubenswrapper[4870]: E0216 17:58:30.571731 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26978898-68c4-4a89-a171-78245fdc9ffd" containerName="extract-utilities" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.571737 4870 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="26978898-68c4-4a89-a171-78245fdc9ffd" containerName="extract-utilities" Feb 16 17:58:30 crc kubenswrapper[4870]: E0216 17:58:30.571752 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26978898-68c4-4a89-a171-78245fdc9ffd" containerName="registry-server" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.571757 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="26978898-68c4-4a89-a171-78245fdc9ffd" containerName="registry-server" Feb 16 17:58:30 crc kubenswrapper[4870]: E0216 17:58:30.571775 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde30d76-1a57-4db4-ab1e-08e0d98b5abe" containerName="extract-utilities" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.571781 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde30d76-1a57-4db4-ab1e-08e0d98b5abe" containerName="extract-utilities" Feb 16 17:58:30 crc kubenswrapper[4870]: E0216 17:58:30.571793 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde30d76-1a57-4db4-ab1e-08e0d98b5abe" containerName="registry-server" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.571799 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde30d76-1a57-4db4-ab1e-08e0d98b5abe" containerName="registry-server" Feb 16 17:58:30 crc kubenswrapper[4870]: E0216 17:58:30.571805 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b85e09-9bfa-42a0-84ec-06f45a67a5f5" containerName="extract-content" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.571810 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b85e09-9bfa-42a0-84ec-06f45a67a5f5" containerName="extract-content" Feb 16 17:58:30 crc kubenswrapper[4870]: E0216 17:58:30.571828 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b85e09-9bfa-42a0-84ec-06f45a67a5f5" containerName="extract-utilities" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.571833 4870 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="88b85e09-9bfa-42a0-84ec-06f45a67a5f5" containerName="extract-utilities" Feb 16 17:58:30 crc kubenswrapper[4870]: E0216 17:58:30.571845 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde30d76-1a57-4db4-ab1e-08e0d98b5abe" containerName="extract-content" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.571852 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde30d76-1a57-4db4-ab1e-08e0d98b5abe" containerName="extract-content" Feb 16 17:58:30 crc kubenswrapper[4870]: E0216 17:58:30.571860 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b85e09-9bfa-42a0-84ec-06f45a67a5f5" containerName="registry-server" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.571868 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b85e09-9bfa-42a0-84ec-06f45a67a5f5" containerName="registry-server" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.572121 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="cde30d76-1a57-4db4-ab1e-08e0d98b5abe" containerName="registry-server" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.572147 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="88b85e09-9bfa-42a0-84ec-06f45a67a5f5" containerName="registry-server" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.572159 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="26978898-68c4-4a89-a171-78245fdc9ffd" containerName="registry-server" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.573684 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z6z9w" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.580993 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z6z9w"] Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.702918 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1efeea5-ad40-4587-be94-bd8da7d9fa85-utilities\") pod \"certified-operators-z6z9w\" (UID: \"b1efeea5-ad40-4587-be94-bd8da7d9fa85\") " pod="openshift-marketplace/certified-operators-z6z9w" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.703404 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-982sp\" (UniqueName: \"kubernetes.io/projected/b1efeea5-ad40-4587-be94-bd8da7d9fa85-kube-api-access-982sp\") pod \"certified-operators-z6z9w\" (UID: \"b1efeea5-ad40-4587-be94-bd8da7d9fa85\") " pod="openshift-marketplace/certified-operators-z6z9w" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.703589 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1efeea5-ad40-4587-be94-bd8da7d9fa85-catalog-content\") pod \"certified-operators-z6z9w\" (UID: \"b1efeea5-ad40-4587-be94-bd8da7d9fa85\") " pod="openshift-marketplace/certified-operators-z6z9w" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.805609 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-982sp\" (UniqueName: \"kubernetes.io/projected/b1efeea5-ad40-4587-be94-bd8da7d9fa85-kube-api-access-982sp\") pod \"certified-operators-z6z9w\" (UID: \"b1efeea5-ad40-4587-be94-bd8da7d9fa85\") " pod="openshift-marketplace/certified-operators-z6z9w" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.805702 4870 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1efeea5-ad40-4587-be94-bd8da7d9fa85-catalog-content\") pod \"certified-operators-z6z9w\" (UID: \"b1efeea5-ad40-4587-be94-bd8da7d9fa85\") " pod="openshift-marketplace/certified-operators-z6z9w" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.805801 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1efeea5-ad40-4587-be94-bd8da7d9fa85-utilities\") pod \"certified-operators-z6z9w\" (UID: \"b1efeea5-ad40-4587-be94-bd8da7d9fa85\") " pod="openshift-marketplace/certified-operators-z6z9w" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.806535 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1efeea5-ad40-4587-be94-bd8da7d9fa85-utilities\") pod \"certified-operators-z6z9w\" (UID: \"b1efeea5-ad40-4587-be94-bd8da7d9fa85\") " pod="openshift-marketplace/certified-operators-z6z9w" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.806561 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1efeea5-ad40-4587-be94-bd8da7d9fa85-catalog-content\") pod \"certified-operators-z6z9w\" (UID: \"b1efeea5-ad40-4587-be94-bd8da7d9fa85\") " pod="openshift-marketplace/certified-operators-z6z9w" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.829805 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-982sp\" (UniqueName: \"kubernetes.io/projected/b1efeea5-ad40-4587-be94-bd8da7d9fa85-kube-api-access-982sp\") pod \"certified-operators-z6z9w\" (UID: \"b1efeea5-ad40-4587-be94-bd8da7d9fa85\") " pod="openshift-marketplace/certified-operators-z6z9w" Feb 16 17:58:30 crc kubenswrapper[4870]: I0216 17:58:30.890503 4870 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z6z9w" Feb 16 17:58:31 crc kubenswrapper[4870]: I0216 17:58:31.459739 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z6z9w"] Feb 16 17:58:32 crc kubenswrapper[4870]: I0216 17:58:32.017168 4870 generic.go:334] "Generic (PLEG): container finished" podID="b1efeea5-ad40-4587-be94-bd8da7d9fa85" containerID="5a2e0609a27a6b11567a7c0fb40357b4d404edc3dc1bded3770ff9115bf32e56" exitCode=0 Feb 16 17:58:32 crc kubenswrapper[4870]: I0216 17:58:32.017594 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6z9w" event={"ID":"b1efeea5-ad40-4587-be94-bd8da7d9fa85","Type":"ContainerDied","Data":"5a2e0609a27a6b11567a7c0fb40357b4d404edc3dc1bded3770ff9115bf32e56"} Feb 16 17:58:32 crc kubenswrapper[4870]: I0216 17:58:32.017723 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6z9w" event={"ID":"b1efeea5-ad40-4587-be94-bd8da7d9fa85","Type":"ContainerStarted","Data":"6516e10b919ad1128e0f0c973a490192b86039498fe31e0142f575c12e945cce"} Feb 16 17:58:35 crc kubenswrapper[4870]: I0216 17:58:35.367218 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:58:35 crc kubenswrapper[4870]: I0216 17:58:35.368079 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:58:37 crc kubenswrapper[4870]: I0216 17:58:37.096388 4870 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6z9w" event={"ID":"b1efeea5-ad40-4587-be94-bd8da7d9fa85","Type":"ContainerStarted","Data":"5237358b2ce67d84d86ba2e8b9ab0a8f6d7b53ec88071f5b1b5e800c6d1a7110"} Feb 16 17:58:39 crc kubenswrapper[4870]: I0216 17:58:39.114337 4870 generic.go:334] "Generic (PLEG): container finished" podID="b1efeea5-ad40-4587-be94-bd8da7d9fa85" containerID="5237358b2ce67d84d86ba2e8b9ab0a8f6d7b53ec88071f5b1b5e800c6d1a7110" exitCode=0 Feb 16 17:58:39 crc kubenswrapper[4870]: I0216 17:58:39.114417 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6z9w" event={"ID":"b1efeea5-ad40-4587-be94-bd8da7d9fa85","Type":"ContainerDied","Data":"5237358b2ce67d84d86ba2e8b9ab0a8f6d7b53ec88071f5b1b5e800c6d1a7110"} Feb 16 17:58:40 crc kubenswrapper[4870]: I0216 17:58:40.130571 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6z9w" event={"ID":"b1efeea5-ad40-4587-be94-bd8da7d9fa85","Type":"ContainerStarted","Data":"aa52994f82f8d21fd916b91f252a67aa13f64b344168ab4786adceb99ed602d0"} Feb 16 17:58:40 crc kubenswrapper[4870]: I0216 17:58:40.154205 4870 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z6z9w" podStartSLOduration=2.637630959 podStartE2EDuration="10.154185055s" podCreationTimestamp="2026-02-16 17:58:30 +0000 UTC" firstStartedPulling="2026-02-16 17:58:32.018735352 +0000 UTC m=+3516.502199736" lastFinishedPulling="2026-02-16 17:58:39.535289448 +0000 UTC m=+3524.018753832" observedRunningTime="2026-02-16 17:58:40.148776241 +0000 UTC m=+3524.632240675" watchObservedRunningTime="2026-02-16 17:58:40.154185055 +0000 UTC m=+3524.637649439" Feb 16 17:58:40 crc kubenswrapper[4870]: I0216 17:58:40.891710 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z6z9w" Feb 16 17:58:40 crc kubenswrapper[4870]: 
I0216 17:58:40.892078 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z6z9w" Feb 16 17:58:41 crc kubenswrapper[4870]: I0216 17:58:41.947595 4870 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-z6z9w" podUID="b1efeea5-ad40-4587-be94-bd8da7d9fa85" containerName="registry-server" probeResult="failure" output=< Feb 16 17:58:41 crc kubenswrapper[4870]: timeout: failed to connect service ":50051" within 1s Feb 16 17:58:41 crc kubenswrapper[4870]: > Feb 16 17:58:42 crc kubenswrapper[4870]: E0216 17:58:42.224893 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:58:50 crc kubenswrapper[4870]: I0216 17:58:50.972587 4870 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z6z9w" Feb 16 17:58:51 crc kubenswrapper[4870]: I0216 17:58:51.045679 4870 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z6z9w" Feb 16 17:58:51 crc kubenswrapper[4870]: I0216 17:58:51.228542 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z6z9w"] Feb 16 17:58:52 crc kubenswrapper[4870]: I0216 17:58:52.260567 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z6z9w" podUID="b1efeea5-ad40-4587-be94-bd8da7d9fa85" containerName="registry-server" containerID="cri-o://aa52994f82f8d21fd916b91f252a67aa13f64b344168ab4786adceb99ed602d0" gracePeriod=2 Feb 16 17:58:52 crc kubenswrapper[4870]: I0216 17:58:52.834685 4870 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z6z9w" Feb 16 17:58:52 crc kubenswrapper[4870]: I0216 17:58:52.980259 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1efeea5-ad40-4587-be94-bd8da7d9fa85-catalog-content\") pod \"b1efeea5-ad40-4587-be94-bd8da7d9fa85\" (UID: \"b1efeea5-ad40-4587-be94-bd8da7d9fa85\") " Feb 16 17:58:52 crc kubenswrapper[4870]: I0216 17:58:52.980398 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-982sp\" (UniqueName: \"kubernetes.io/projected/b1efeea5-ad40-4587-be94-bd8da7d9fa85-kube-api-access-982sp\") pod \"b1efeea5-ad40-4587-be94-bd8da7d9fa85\" (UID: \"b1efeea5-ad40-4587-be94-bd8da7d9fa85\") " Feb 16 17:58:52 crc kubenswrapper[4870]: I0216 17:58:52.980436 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1efeea5-ad40-4587-be94-bd8da7d9fa85-utilities\") pod \"b1efeea5-ad40-4587-be94-bd8da7d9fa85\" (UID: \"b1efeea5-ad40-4587-be94-bd8da7d9fa85\") " Feb 16 17:58:52 crc kubenswrapper[4870]: I0216 17:58:52.981586 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1efeea5-ad40-4587-be94-bd8da7d9fa85-utilities" (OuterVolumeSpecName: "utilities") pod "b1efeea5-ad40-4587-be94-bd8da7d9fa85" (UID: "b1efeea5-ad40-4587-be94-bd8da7d9fa85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:58:52 crc kubenswrapper[4870]: I0216 17:58:52.988195 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1efeea5-ad40-4587-be94-bd8da7d9fa85-kube-api-access-982sp" (OuterVolumeSpecName: "kube-api-access-982sp") pod "b1efeea5-ad40-4587-be94-bd8da7d9fa85" (UID: "b1efeea5-ad40-4587-be94-bd8da7d9fa85"). InnerVolumeSpecName "kube-api-access-982sp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.027143 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1efeea5-ad40-4587-be94-bd8da7d9fa85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1efeea5-ad40-4587-be94-bd8da7d9fa85" (UID: "b1efeea5-ad40-4587-be94-bd8da7d9fa85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.082190 4870 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1efeea5-ad40-4587-be94-bd8da7d9fa85-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.082224 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-982sp\" (UniqueName: \"kubernetes.io/projected/b1efeea5-ad40-4587-be94-bd8da7d9fa85-kube-api-access-982sp\") on node \"crc\" DevicePath \"\"" Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.082240 4870 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1efeea5-ad40-4587-be94-bd8da7d9fa85-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.271483 4870 generic.go:334] "Generic (PLEG): container finished" podID="b1efeea5-ad40-4587-be94-bd8da7d9fa85" containerID="aa52994f82f8d21fd916b91f252a67aa13f64b344168ab4786adceb99ed602d0" exitCode=0 Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.271527 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6z9w" event={"ID":"b1efeea5-ad40-4587-be94-bd8da7d9fa85","Type":"ContainerDied","Data":"aa52994f82f8d21fd916b91f252a67aa13f64b344168ab4786adceb99ed602d0"} Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.271565 4870 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-z6z9w" event={"ID":"b1efeea5-ad40-4587-be94-bd8da7d9fa85","Type":"ContainerDied","Data":"6516e10b919ad1128e0f0c973a490192b86039498fe31e0142f575c12e945cce"} Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.271586 4870 scope.go:117] "RemoveContainer" containerID="aa52994f82f8d21fd916b91f252a67aa13f64b344168ab4786adceb99ed602d0" Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.271592 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z6z9w" Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.290821 4870 scope.go:117] "RemoveContainer" containerID="5237358b2ce67d84d86ba2e8b9ab0a8f6d7b53ec88071f5b1b5e800c6d1a7110" Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.329791 4870 scope.go:117] "RemoveContainer" containerID="5a2e0609a27a6b11567a7c0fb40357b4d404edc3dc1bded3770ff9115bf32e56" Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.397676 4870 scope.go:117] "RemoveContainer" containerID="aa52994f82f8d21fd916b91f252a67aa13f64b344168ab4786adceb99ed602d0" Feb 16 17:58:53 crc kubenswrapper[4870]: E0216 17:58:53.398671 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa52994f82f8d21fd916b91f252a67aa13f64b344168ab4786adceb99ed602d0\": container with ID starting with aa52994f82f8d21fd916b91f252a67aa13f64b344168ab4786adceb99ed602d0 not found: ID does not exist" containerID="aa52994f82f8d21fd916b91f252a67aa13f64b344168ab4786adceb99ed602d0" Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.398729 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa52994f82f8d21fd916b91f252a67aa13f64b344168ab4786adceb99ed602d0"} err="failed to get container status \"aa52994f82f8d21fd916b91f252a67aa13f64b344168ab4786adceb99ed602d0\": rpc error: code = NotFound desc = could not find container 
\"aa52994f82f8d21fd916b91f252a67aa13f64b344168ab4786adceb99ed602d0\": container with ID starting with aa52994f82f8d21fd916b91f252a67aa13f64b344168ab4786adceb99ed602d0 not found: ID does not exist" Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.398763 4870 scope.go:117] "RemoveContainer" containerID="5237358b2ce67d84d86ba2e8b9ab0a8f6d7b53ec88071f5b1b5e800c6d1a7110" Feb 16 17:58:53 crc kubenswrapper[4870]: E0216 17:58:53.399301 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5237358b2ce67d84d86ba2e8b9ab0a8f6d7b53ec88071f5b1b5e800c6d1a7110\": container with ID starting with 5237358b2ce67d84d86ba2e8b9ab0a8f6d7b53ec88071f5b1b5e800c6d1a7110 not found: ID does not exist" containerID="5237358b2ce67d84d86ba2e8b9ab0a8f6d7b53ec88071f5b1b5e800c6d1a7110" Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.399373 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5237358b2ce67d84d86ba2e8b9ab0a8f6d7b53ec88071f5b1b5e800c6d1a7110"} err="failed to get container status \"5237358b2ce67d84d86ba2e8b9ab0a8f6d7b53ec88071f5b1b5e800c6d1a7110\": rpc error: code = NotFound desc = could not find container \"5237358b2ce67d84d86ba2e8b9ab0a8f6d7b53ec88071f5b1b5e800c6d1a7110\": container with ID starting with 5237358b2ce67d84d86ba2e8b9ab0a8f6d7b53ec88071f5b1b5e800c6d1a7110 not found: ID does not exist" Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.399401 4870 scope.go:117] "RemoveContainer" containerID="5a2e0609a27a6b11567a7c0fb40357b4d404edc3dc1bded3770ff9115bf32e56" Feb 16 17:58:53 crc kubenswrapper[4870]: E0216 17:58:53.399641 4870 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a2e0609a27a6b11567a7c0fb40357b4d404edc3dc1bded3770ff9115bf32e56\": container with ID starting with 5a2e0609a27a6b11567a7c0fb40357b4d404edc3dc1bded3770ff9115bf32e56 not found: ID does not exist" 
containerID="5a2e0609a27a6b11567a7c0fb40357b4d404edc3dc1bded3770ff9115bf32e56" Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.399694 4870 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a2e0609a27a6b11567a7c0fb40357b4d404edc3dc1bded3770ff9115bf32e56"} err="failed to get container status \"5a2e0609a27a6b11567a7c0fb40357b4d404edc3dc1bded3770ff9115bf32e56\": rpc error: code = NotFound desc = could not find container \"5a2e0609a27a6b11567a7c0fb40357b4d404edc3dc1bded3770ff9115bf32e56\": container with ID starting with 5a2e0609a27a6b11567a7c0fb40357b4d404edc3dc1bded3770ff9115bf32e56 not found: ID does not exist" Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.402364 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z6z9w"] Feb 16 17:58:53 crc kubenswrapper[4870]: I0216 17:58:53.410934 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z6z9w"] Feb 16 17:58:54 crc kubenswrapper[4870]: E0216 17:58:54.224494 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:58:54 crc kubenswrapper[4870]: I0216 17:58:54.233223 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1efeea5-ad40-4587-be94-bd8da7d9fa85" path="/var/lib/kubelet/pods/b1efeea5-ad40-4587-be94-bd8da7d9fa85/volumes" Feb 16 17:59:05 crc kubenswrapper[4870]: I0216 17:59:05.367646 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 16 17:59:05 crc kubenswrapper[4870]: I0216 17:59:05.368521 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:59:09 crc kubenswrapper[4870]: E0216 17:59:09.225878 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:59:22 crc kubenswrapper[4870]: E0216 17:59:22.226075 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:59:33 crc kubenswrapper[4870]: E0216 17:59:33.224783 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89" Feb 16 17:59:35 crc kubenswrapper[4870]: I0216 17:59:35.367391 4870 patch_prober.go:28] interesting pod/machine-config-daemon-cgzwr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 
16 17:59:35 crc kubenswrapper[4870]: I0216 17:59:35.367752 4870 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:59:35 crc kubenswrapper[4870]: I0216 17:59:35.367830 4870 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr"
Feb 16 17:59:35 crc kubenswrapper[4870]: I0216 17:59:35.368697 4870 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"31bbd1163ce6eca4377493bd0164a78761deddd73e2a60c81e561bd8ea435ba6"} pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 17:59:35 crc kubenswrapper[4870]: I0216 17:59:35.368766 4870 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerName="machine-config-daemon" containerID="cri-o://31bbd1163ce6eca4377493bd0164a78761deddd73e2a60c81e561bd8ea435ba6" gracePeriod=600
Feb 16 17:59:35 crc kubenswrapper[4870]: E0216 17:59:35.494908 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab"
Feb 16 17:59:35 crc kubenswrapper[4870]: I0216 17:59:35.700569 4870 generic.go:334] "Generic (PLEG): container finished" podID="a3e693e8-f31b-4cc5-b521-0f37451019ab" containerID="31bbd1163ce6eca4377493bd0164a78761deddd73e2a60c81e561bd8ea435ba6" exitCode=0
Feb 16 17:59:35 crc kubenswrapper[4870]: I0216 17:59:35.700618 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" event={"ID":"a3e693e8-f31b-4cc5-b521-0f37451019ab","Type":"ContainerDied","Data":"31bbd1163ce6eca4377493bd0164a78761deddd73e2a60c81e561bd8ea435ba6"}
Feb 16 17:59:35 crc kubenswrapper[4870]: I0216 17:59:35.700652 4870 scope.go:117] "RemoveContainer" containerID="fad609847562591995d01b404761ef2f912723b7c6df5a3a9c2782bc74728b1f"
Feb 16 17:59:35 crc kubenswrapper[4870]: I0216 17:59:35.701442 4870 scope.go:117] "RemoveContainer" containerID="31bbd1163ce6eca4377493bd0164a78761deddd73e2a60c81e561bd8ea435ba6"
Feb 16 17:59:35 crc kubenswrapper[4870]: E0216 17:59:35.701824 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab"
Feb 16 17:59:44 crc kubenswrapper[4870]: E0216 17:59:44.224251 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89"
Feb 16 17:59:48 crc kubenswrapper[4870]: I0216 17:59:48.223804 4870 scope.go:117] "RemoveContainer" containerID="31bbd1163ce6eca4377493bd0164a78761deddd73e2a60c81e561bd8ea435ba6"
Feb 16 17:59:48 crc kubenswrapper[4870]: E0216 17:59:48.225244 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab"
Feb 16 17:59:57 crc kubenswrapper[4870]: E0216 17:59:57.224922 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.166793 4870 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x"]
Feb 16 18:00:00 crc kubenswrapper[4870]: E0216 18:00:00.167639 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1efeea5-ad40-4587-be94-bd8da7d9fa85" containerName="extract-utilities"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.167727 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1efeea5-ad40-4587-be94-bd8da7d9fa85" containerName="extract-utilities"
Feb 16 18:00:00 crc kubenswrapper[4870]: E0216 18:00:00.167746 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1efeea5-ad40-4587-be94-bd8da7d9fa85" containerName="extract-content"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.167755 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1efeea5-ad40-4587-be94-bd8da7d9fa85" containerName="extract-content"
Feb 16 18:00:00 crc kubenswrapper[4870]: E0216 18:00:00.167774 4870 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1efeea5-ad40-4587-be94-bd8da7d9fa85" containerName="registry-server"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.167784 4870 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1efeea5-ad40-4587-be94-bd8da7d9fa85" containerName="registry-server"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.168039 4870 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1efeea5-ad40-4587-be94-bd8da7d9fa85" containerName="registry-server"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.168898 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.172080 4870 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.177123 4870 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.179009 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x"]
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.285165 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrnw7\" (UniqueName: \"kubernetes.io/projected/3976df34-8626-488c-9f3a-c4c4c531a8bb-kube-api-access-mrnw7\") pod \"collect-profiles-29521080-qfw6x\" (UID: \"3976df34-8626-488c-9f3a-c4c4c531a8bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.285226 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3976df34-8626-488c-9f3a-c4c4c531a8bb-secret-volume\") pod \"collect-profiles-29521080-qfw6x\" (UID: \"3976df34-8626-488c-9f3a-c4c4c531a8bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.285250 4870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3976df34-8626-488c-9f3a-c4c4c531a8bb-config-volume\") pod \"collect-profiles-29521080-qfw6x\" (UID: \"3976df34-8626-488c-9f3a-c4c4c531a8bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.389466 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrnw7\" (UniqueName: \"kubernetes.io/projected/3976df34-8626-488c-9f3a-c4c4c531a8bb-kube-api-access-mrnw7\") pod \"collect-profiles-29521080-qfw6x\" (UID: \"3976df34-8626-488c-9f3a-c4c4c531a8bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.389515 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3976df34-8626-488c-9f3a-c4c4c531a8bb-secret-volume\") pod \"collect-profiles-29521080-qfw6x\" (UID: \"3976df34-8626-488c-9f3a-c4c4c531a8bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.389556 4870 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3976df34-8626-488c-9f3a-c4c4c531a8bb-config-volume\") pod \"collect-profiles-29521080-qfw6x\" (UID: \"3976df34-8626-488c-9f3a-c4c4c531a8bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.390502 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3976df34-8626-488c-9f3a-c4c4c531a8bb-config-volume\") pod \"collect-profiles-29521080-qfw6x\" (UID: \"3976df34-8626-488c-9f3a-c4c4c531a8bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.395692 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3976df34-8626-488c-9f3a-c4c4c531a8bb-secret-volume\") pod \"collect-profiles-29521080-qfw6x\" (UID: \"3976df34-8626-488c-9f3a-c4c4c531a8bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.410389 4870 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrnw7\" (UniqueName: \"kubernetes.io/projected/3976df34-8626-488c-9f3a-c4c4c531a8bb-kube-api-access-mrnw7\") pod \"collect-profiles-29521080-qfw6x\" (UID: \"3976df34-8626-488c-9f3a-c4c4c531a8bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.501661 4870 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x"
Feb 16 18:00:00 crc kubenswrapper[4870]: I0216 18:00:00.982370 4870 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x"]
Feb 16 18:00:01 crc kubenswrapper[4870]: I0216 18:00:01.222854 4870 scope.go:117] "RemoveContainer" containerID="31bbd1163ce6eca4377493bd0164a78761deddd73e2a60c81e561bd8ea435ba6"
Feb 16 18:00:01 crc kubenswrapper[4870]: E0216 18:00:01.223174 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab"
Feb 16 18:00:01 crc kubenswrapper[4870]: I0216 18:00:01.960325 4870 generic.go:334] "Generic (PLEG): container finished" podID="3976df34-8626-488c-9f3a-c4c4c531a8bb" containerID="dff7ff0f4ea5b7df2cfe667ce895d29e9724bb25515e78a683b4494300e41e0e" exitCode=0
Feb 16 18:00:01 crc kubenswrapper[4870]: I0216 18:00:01.960579 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x" event={"ID":"3976df34-8626-488c-9f3a-c4c4c531a8bb","Type":"ContainerDied","Data":"dff7ff0f4ea5b7df2cfe667ce895d29e9724bb25515e78a683b4494300e41e0e"}
Feb 16 18:00:01 crc kubenswrapper[4870]: I0216 18:00:01.960608 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x" event={"ID":"3976df34-8626-488c-9f3a-c4c4c531a8bb","Type":"ContainerStarted","Data":"0d0e45d1b47c204c604878ccc0f934f8793bfe98502952609c78f76b95ae5991"}
Feb 16 18:00:03 crc kubenswrapper[4870]: I0216 18:00:03.359691 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x"
Feb 16 18:00:03 crc kubenswrapper[4870]: I0216 18:00:03.454120 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3976df34-8626-488c-9f3a-c4c4c531a8bb-secret-volume\") pod \"3976df34-8626-488c-9f3a-c4c4c531a8bb\" (UID: \"3976df34-8626-488c-9f3a-c4c4c531a8bb\") "
Feb 16 18:00:03 crc kubenswrapper[4870]: I0216 18:00:03.454568 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3976df34-8626-488c-9f3a-c4c4c531a8bb-config-volume\") pod \"3976df34-8626-488c-9f3a-c4c4c531a8bb\" (UID: \"3976df34-8626-488c-9f3a-c4c4c531a8bb\") "
Feb 16 18:00:03 crc kubenswrapper[4870]: I0216 18:00:03.454832 4870 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrnw7\" (UniqueName: \"kubernetes.io/projected/3976df34-8626-488c-9f3a-c4c4c531a8bb-kube-api-access-mrnw7\") pod \"3976df34-8626-488c-9f3a-c4c4c531a8bb\" (UID: \"3976df34-8626-488c-9f3a-c4c4c531a8bb\") "
Feb 16 18:00:03 crc kubenswrapper[4870]: I0216 18:00:03.455065 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3976df34-8626-488c-9f3a-c4c4c531a8bb-config-volume" (OuterVolumeSpecName: "config-volume") pod "3976df34-8626-488c-9f3a-c4c4c531a8bb" (UID: "3976df34-8626-488c-9f3a-c4c4c531a8bb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 18:00:03 crc kubenswrapper[4870]: I0216 18:00:03.455341 4870 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3976df34-8626-488c-9f3a-c4c4c531a8bb-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 18:00:03 crc kubenswrapper[4870]: I0216 18:00:03.459674 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3976df34-8626-488c-9f3a-c4c4c531a8bb-kube-api-access-mrnw7" (OuterVolumeSpecName: "kube-api-access-mrnw7") pod "3976df34-8626-488c-9f3a-c4c4c531a8bb" (UID: "3976df34-8626-488c-9f3a-c4c4c531a8bb"). InnerVolumeSpecName "kube-api-access-mrnw7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 18:00:03 crc kubenswrapper[4870]: I0216 18:00:03.459973 4870 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3976df34-8626-488c-9f3a-c4c4c531a8bb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3976df34-8626-488c-9f3a-c4c4c531a8bb" (UID: "3976df34-8626-488c-9f3a-c4c4c531a8bb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 18:00:03 crc kubenswrapper[4870]: I0216 18:00:03.556781 4870 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrnw7\" (UniqueName: \"kubernetes.io/projected/3976df34-8626-488c-9f3a-c4c4c531a8bb-kube-api-access-mrnw7\") on node \"crc\" DevicePath \"\""
Feb 16 18:00:03 crc kubenswrapper[4870]: I0216 18:00:03.556828 4870 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3976df34-8626-488c-9f3a-c4c4c531a8bb-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 18:00:03 crc kubenswrapper[4870]: I0216 18:00:03.978374 4870 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x" event={"ID":"3976df34-8626-488c-9f3a-c4c4c531a8bb","Type":"ContainerDied","Data":"0d0e45d1b47c204c604878ccc0f934f8793bfe98502952609c78f76b95ae5991"}
Feb 16 18:00:03 crc kubenswrapper[4870]: I0216 18:00:03.978440 4870 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d0e45d1b47c204c604878ccc0f934f8793bfe98502952609c78f76b95ae5991"
Feb 16 18:00:03 crc kubenswrapper[4870]: I0216 18:00:03.978390 4870 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-qfw6x"
Feb 16 18:00:04 crc kubenswrapper[4870]: I0216 18:00:04.439991 4870 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v"]
Feb 16 18:00:04 crc kubenswrapper[4870]: I0216 18:00:04.450165 4870 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521035-jkj2v"]
Feb 16 18:00:06 crc kubenswrapper[4870]: I0216 18:00:06.237450 4870 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f" path="/var/lib/kubelet/pods/d575ed8b-2d09-45e8-ad80-c6c6b8f9db9f/volumes"
Feb 16 18:00:11 crc kubenswrapper[4870]: E0216 18:00:11.225512 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89"
Feb 16 18:00:14 crc kubenswrapper[4870]: I0216 18:00:14.223383 4870 scope.go:117] "RemoveContainer" containerID="31bbd1163ce6eca4377493bd0164a78761deddd73e2a60c81e561bd8ea435ba6"
Feb 16 18:00:14 crc kubenswrapper[4870]: E0216 18:00:14.223966 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab"
Feb 16 18:00:22 crc kubenswrapper[4870]: E0216 18:00:22.224935 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89"
Feb 16 18:00:28 crc kubenswrapper[4870]: I0216 18:00:28.223474 4870 scope.go:117] "RemoveContainer" containerID="31bbd1163ce6eca4377493bd0164a78761deddd73e2a60c81e561bd8ea435ba6"
Feb 16 18:00:28 crc kubenswrapper[4870]: E0216 18:00:28.224692 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab"
Feb 16 18:00:33 crc kubenswrapper[4870]: E0216 18:00:33.225648 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89"
Feb 16 18:00:42 crc kubenswrapper[4870]: I0216 18:00:42.223777 4870 scope.go:117] "RemoveContainer" containerID="31bbd1163ce6eca4377493bd0164a78761deddd73e2a60c81e561bd8ea435ba6"
Feb 16 18:00:42 crc kubenswrapper[4870]: E0216 18:00:42.224557 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cgzwr_openshift-machine-config-operator(a3e693e8-f31b-4cc5-b521-0f37451019ab)\"" pod="openshift-machine-config-operator/machine-config-daemon-cgzwr" podUID="a3e693e8-f31b-4cc5-b521-0f37451019ab"
Feb 16 18:00:44 crc kubenswrapper[4870]: E0216 18:00:44.224747 4870 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-6hkdm" podUID="34a86750-1fff-4add-8462-7ab805ec7f89"